[TNNLS] A Comprehensive Survey of Awesome Visual Transformer Literature.

liuyang-ict/awesome-visual-transformers


This page is a comprehensive list of awesome visual Transformer literature, ordered to match our survey (A Survey of Visual Transformers) published in IEEE Transactions on Neural Networks and Learning Systems (TNNLS). We will regularly update this page with the latest representative papers and their released source code. If you find an overlooked paper, please open an issue or contact us at [email protected].

Content

Original Transformer

Attention Is All You Need. [12th Jun. 2017] [NeurIPS, 2017].
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin.
[PDF] [Github]
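
For orientation, the core operation this paper introduces is scaled dot-product attention, softmax(QK^T / sqrt(d_k))·V. Below is a minimal NumPy sketch of that formula (an illustration for readers of this list, not code from any of the linked repositories):

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    # Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V.
    d_k = q.shape[-1]
    scores = q @ k.swapaxes(-2, -1) / np.sqrt(d_k)
    # Numerically stable softmax over the key axis.
    scores -= scores.max(axis=-1, keepdims=True)
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

# Toy self-attention: 4 tokens, model dimension 8.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))
out = scaled_dot_product_attention(x, x, x)
assert out.shape == (4, 8)
```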

Transformer for Classification

1. Original Visual Transformer

Stand-Alone Self-Attention in Vision Models. [13th Jun. 2019] [NeurIPS, 2019].
Prajit Ramachandran, Niki Parmar, Ashish Vaswani, Irwan Bello, Anselm Levskaya, Jonathon Shlens.
[PDF] [Github]

On the Relationship between Self-Attention and Convolutional Layers. [10th Jan. 2020] [ICLR, 2020].
Jean-Baptiste Cordonnier, Andreas Loukas, Martin Jaggi.
[PDF] [Github]

An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. [10th Mar. 2021] [ICLR, 2021].
Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby.
[PDF] [Github]
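
The "16x16 words" of the title refers to splitting the input image into fixed-size patches that are flattened into tokens before entering a standard Transformer encoder. A minimal NumPy sketch of that patchify step (illustrative only; the paper implements it as a learned linear projection):

```python
import numpy as np

def patchify(img, patch=16):
    # Split an (H, W, C) image into (H/patch * W/patch) flattened patch tokens.
    h, w, c = img.shape
    assert h % patch == 0 and w % patch == 0
    img = img.reshape(h // patch, patch, w // patch, patch, c)
    return img.transpose(0, 2, 1, 3, 4).reshape(-1, patch * patch * c)

# A 224x224 RGB image yields 14*14 = 196 tokens of dimension 16*16*3 = 768.
img = np.zeros((224, 224, 3))
tokens = patchify(img)
assert tokens.shape == (196, 768)
```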

2. Transformer Enhanced CNN

Visual Transformers: Token-based Image Representation and Processing for Computer Vision. [5th Jun 2020].
Bichen Wu, Chenfeng Xu, Xiaoliang Dai, Alvin Wan, Peizhao Zhang, Zhicheng Yan, Masayoshi Tomizuka, Joseph Gonzalez, Kurt Keutzer, Peter Vajda.
[PDF]

Bottleneck Transformers for Visual Recognition. [2nd Aug. 2021] [CVPR, 2021].
Aravind Srinivas, Tsung-Yi Lin, Niki Parmar, Jonathon Shlens, Pieter Abbeel, Ashish Vaswani.
[PDF] [Github]

3. CNN Enhanced Transformer

Training data-efficient image transformers & distillation through attention. [15th Jan. 2021] [ICML, 2021].
Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervé Jégou.
[PDF] [Github]

ConViT: Improving Vision Transformers with Soft Convolutional Inductive Biases. [10th Jun. 2021] [ICML, 2021].
Stéphane d'Ascoli, Hugo Touvron, Matthew L. Leavitt, Ari S. Morcos, Giulio Biroli, Levent Sagun.
[PDF] [Github]

Incorporating Convolution Designs into Visual Transformers. [20th Apr. 2021] [ICCV, 2021].
Kun Yuan, Shaopeng Guo, Ziwei Liu, Aojun Zhou, Fengwei Yu, Wei Wu.
[PDF] [Github]

LocalViT: Bringing Locality to Vision Transformers. [12th Apr. 2021].
Yawei Li, Kai Zhang, Jiezhang Cao, Radu Timofte, Luc Van Gool.
[PDF] [Github]

Conditional Positional Encodings for Vision Transformers. [22nd Feb. 2021].
Xiangxiang Chu, Zhi Tian, Bo Zhang, Xinlong Wang, Xiaolin Wei, Huaxia Xia, Chunhua Shen.
[PDF] [Github]

ResT: An Efficient Transformer for Visual Recognition. [14th Oct. 2021] [NeurIPS, 2021].
Qinglong Zhang, YuBin Yang.
[PDF] [Github]

Early Convolutions Help Transformers See Better. [25th Oct. 2021] [NeurIPS, 2021].
Tete Xiao, Mannat Singh, Eric Mintun, Trevor Darrell, Piotr Dollár, Ross Girshick.
[PDF] [Github]

CoAtNet: Marrying Convolution and Attention for All Data Sizes. [15th Sep. 2021] [NeurIPS, 2021].
Zihang Dai, Hanxiao Liu, Quoc V. Le, Mingxing Tan.
[PDF] [Github]

4. Transformer with Local Attention

Scaling Local Self-Attention for Parameter Efficient Visual Backbones. [7th Jun. 2021] [CVPR, 2021].
Ashish Vaswani, Prajit Ramachandran, Aravind Srinivas, Niki Parmar, Blake Hechtman, Jonathon Shlens.
[PDF] [Github]

Swin Transformer: Hierarchical Vision Transformer using Shifted Windows. [17th Aug. 2021] [ICCV, 2021].
Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, Baining Guo.
[PDF] [Github]
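
Swin's key idea is restricting self-attention to non-overlapping local windows and cyclically shifting the feature map between layers so neighboring windows exchange information. A minimal NumPy sketch of the window partition and the shift (illustrative only; the actual implementation also masks attention across the wrapped borders):

```python
import numpy as np

def window_partition(x, win=7):
    # Partition an (H, W, C) feature map into non-overlapping win x win windows,
    # the local regions within which attention is computed.
    h, w, c = x.shape
    x = x.reshape(h // win, win, w // win, win, c)
    return x.transpose(0, 2, 1, 3, 4).reshape(-1, win * win, c)

def shifted_window_partition(x, win=7):
    # Cyclically shift by win // 2 before partitioning, so the next layer's
    # windows straddle the previous layer's window borders.
    rolled = np.roll(x, shift=(-(win // 2), -(win // 2)), axis=(0, 1))
    return window_partition(rolled, win)

# A 14x14 map with 7x7 windows gives 2*2 = 4 windows of 49 tokens each.
x = np.arange(14 * 14 * 2, dtype=float).reshape(14, 14, 2)
assert window_partition(x).shape == (4, 49, 2)
assert shifted_window_partition(x).shape == (4, 49, 2)
```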

VOLO: Vision Outlooker for Visual Recognition. [24th Jun. 2021].
Li Yuan, Qibin Hou, Zihang Jiang, Jiashi Feng, Shuicheng Yan.
[PDF] [Github]

Transformer in Transformer. [26th Oct. 2021] [NeurIPS, 2021].
Kai Han, An Xiao, Enhua Wu, Jianyuan Guo, Chunjing Xu, Yunhe Wang.
[PDF] [Github]

Twins: Revisiting the Design of Spatial Attention in Vision Transformers. [30th Sep. 2021] [NeurIPS, 2021].
Xiangxiang Chu, Zhi Tian, Yuqing Wang, Bo Zhang, Haibing Ren, Xiaolin Wei, Huaxia Xia, Chunhua Shen.
[PDF] [Github]

Multi-Scale Vision Longformer: A New Vision Transformer for High-Resolution Image Encoding. [27th May 2021] [ICCV, 2021].
Pengchuan Zhang, Xiyang Dai, Jianwei Yang, Bin Xiao, Lu Yuan, Lei Zhang, Jianfeng Gao.
[PDF] [Github]

Focal Self-attention for Local-Global Interactions in Vision Transformers. [1st Jul. 2021].
Jianwei Yang, Chunyuan Li, Pengchuan Zhang, Xiyang Dai, Bin Xiao, Lu Yuan, Jianfeng Gao.
[PDF] [Github]

5. Hierarchical Transformer

Tokens-to-Token ViT: Training Vision Transformers from Scratch on ImageNet. [30th Nov. 2021] [ICCV, 2021].
Li Yuan, Yunpeng Chen, Tao Wang, Weihao Yu, Yujun Shi, Zihang Jiang, Francis EH Tay, Jiashi Feng, Shuicheng Yan.
[PDF] [Github]

Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without Convolutions. [11th Aug. 2021] [ICCV, 2021].
Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo, Ling Shao.
[PDF] [Github]

PVTv2: Improved Baselines with Pyramid Vision Transformer. [9th Feb. 2022].
Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo, Ling Shao.
[PDF] [Github]

Rethinking Spatial Dimensions of Vision Transformers. [18th Aug. 2021] [ICCV, 2021].
Byeongho Heo, Sangdoo Yun, Dongyoon Han, Sanghyuk Chun, Junsuk Choe, Seong Joon Oh.
[PDF] [Github]

CvT: Introducing Convolutions to Vision Transformers. [29th Mar. 2021] [ICCV, 2021].
Haiping Wu, Bin Xiao, Noel Codella, Mengchen Liu, Xiyang Dai, Lu Yuan, Lei Zhang.
[PDF] [Github]

6. Deep Transformer

Going deeper with Image Transformers. [7th Apr. 2021] [ICCV, 2021].
Hugo Touvron, Matthieu Cord, Alexandre Sablayrolles, Gabriel Synnaeve, Hervé Jégou.
[PDF] [Github]

DeepViT: Towards Deeper Vision Transformer. [19th Apr. 2021].
Daquan Zhou, Bingyi Kang, Xiaojie Jin, Linjie Yang, Xiaochen Lian, Zihang Jiang, Qibin Hou, Jiashi Feng.
[PDF] [Github]

Refiner: Refining Self-attention for Vision Transformers. [7th Jun. 2021].
Daquan Zhou, Yujun Shi, Bingyi Kang, Weihao Yu, Zihang Jiang, Yuan Li, Xiaojie Jin, Qibin Hou, Jiashi Feng.
[PDF] [Github]

Vision Transformers with Patch Diversification. [26th Apr. 2021].
Chengyue Gong, Dilin Wang, Meng Li, Vikas Chandra, Qiang Liu.
[PDF] [Github]

7. Self-Supervised Transformer

Generative Pretraining from Pixels. [14th Nov. 2020] [ICML, 2020].
Mark Chen, Alec Radford, Rewon Child, Jeff Wu, Heewoo Jun, Prafulla Dhariwal, David Luan, Ilya Sutskever.
[PDF] [Github]

MST: Masked Self-Supervised Transformer for Visual Representation. [24th Oct. 2021] [NeurIPS, 2021].
Zhaowen Li, Zhiyang Chen, Fan Yang, Wei Li, Yousong Zhu, Chaoyang Zhao, Rui Deng, Liwei Wu, Rui Zhao, Ming Tang, Jinqiao Wang.
[PDF]

BEiT: BERT Pre-Training of Image Transformers. [15th Jun. 2021] [ICLR, 2022].
Hangbo Bao, Li Dong, Furu Wei.
[PDF] [Github]

Masked Autoencoders Are Scalable Vision Learners. [11th Nov. 2021].
Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross Girshick.
[PDF] [Github]

An Empirical Study of Training Self-Supervised Vision Transformers. [16th Aug. 2021] [ICCV, 2021].
Xinlei Chen, Saining Xie, Kaiming He.
[PDF] [Github]

Emerging Properties in Self-Supervised Vision Transformers. [24th May. 2021] [ICCV, 2021].
Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, Armand Joulin.
[PDF] [Github]

Self-Supervised Learning with Swin Transformers. [10th May. 2021].
Zhenda Xie, Yutong Lin, Zhuliang Yao, Zheng Zhang, Qi Dai, Yue Cao, Han Hu.
[PDF] [Github]

Transformer for Detection

1. Original Transformer Detector

End-to-End Object Detection with Transformers. [18th May. 2020] [ECCV, 2020].
Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, Sergey Zagoruyko.
[PDF] [Github]

Pix2seq: A Language Modeling Framework for Object Detection. [27th Mar. 2022] [ICLR, 2022].
Ting Chen, Saurabh Saxena, Lala Li, David J. Fleet, Geoffrey Hinton.
[PDF] [Github]

2. Sparse Attention

Deformable DETR: Deformable Transformers for End-to-End Object Detection. [18th Mar. 2021] [ICLR, 2021].
Xizhou Zhu, Weijie Su, Lewei Lu, Bin Li, Xiaogang Wang, Jifeng Dai.
[PDF] [Github]

End-to-End Object Detection with Adaptive Clustering Transformer. [18th Oct. 2021] [BMVC, 2021].
Minghang Zheng, Peng Gao, Renrui Zhang, Kunchang Li, Xiaogang Wang, Hongsheng Li, Hao Dong.
[PDF] [Github]

PnP-DETR: Towards Efficient Visual Analysis with Transformers. [2nd Mar. 2022] [ICCV, 2021].
Tao Wang, Li Yuan, Yunpeng Chen, Jiashi Feng, Shuicheng Yan.
[PDF] [Github]

Sparse DETR: Efficient End-to-End Object Detection with Learnable Sparsity. [4th Mar. 2022] [ICLR, 2022].
Byungseok Roh, Jaewoong Shin, Wuhyun Shin, Saehoon Kim.
[PDF] [Github]

3. Spatial Prior

Fast Convergence of DETR with Spatially Modulated Co-Attention. [19th Jan. 2021] [ICCV, 2021].
Peng Gao, Minghang Zheng, Xiaogang Wang, Jifeng Dai, Hongsheng Li.
[PDF] [Github]

Conditional DETR for Fast Training Convergence. [19th Aug. 2021] [ICCV, 2021].
Depu Meng, Xiaokang Chen, Zejia Fan, Gang Zeng, Houqiang Li, Yuhui Yuan, Lei Sun, Jingdong Wang.
[PDF] [Github]

Anchor DETR: Query Design for Transformer-Based Object Detection. [4th Jan. 2022] [AAAI 2021].
Yingming Wang, Xiangyu Zhang, Tong Yang, Jian Sun.
[PDF] [Github]

Efficient DETR: Improving End-to-End Object Detector with Dense Prior. [3rd Apr. 2021].
Zhuyu Yao, Jiangbo Ai, Boxun Li, Chi Zhang.
[PDF]

Dynamic DETR: End-to-End Object Detection with Dynamic Attention. [ICCV, 2021].
Xiyang Dai, Yinpeng Chen, Jianwei Yang, Pengchuan Zhang, Lu Yuan, Lei Zhang.
[PDF]

4. Structural Redesign

Rethinking Transformer-based Set Prediction for Object Detection. [12th Oct. 2021] [ICCV, 2021].
Zhiqing Sun, Shengcao Cao, Yiming Yang, Kris Kitani.
[PDF] [Github]

You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection. [27th Oct. 2021] [NeurIPS, 2021].
Yuxin Fang, Bencheng Liao, Xinggang Wang, Jiemin Fang, Jiyang Qi, Rui Wu, Jianwei Niu, Wenyu Liu.
[PDF] [Github]

5. Pre-Trained Model

UP-DETR: Unsupervised Pre-training for Object Detection with Transformers. [7th Apr. 2021] [CVPR, 2021].
Zhigang Dai, Bolun Cai, Yugeng Lin, Junying Chen.
[PDF] [Github]

FP-DETR: Detection Transformer Advanced by Fully Pre-training. [29th Sep. 2021] [ICLR, 2022].
Wen Wang, Yang Cao, Jing Zhang, DaCheng Tao.
[PDF]

6. Matching Optimization

DN-DETR: Accelerate DETR Training by Introducing Query DeNoising. [2nd Mar. 2022] [CVPR, 2022].
Feng Li, Hao Zhang, Shilong Liu, Jian Guo, Lionel M. Ni, Lei Zhang.
[PDF] [Github]

DINO: DETR with Improved DeNoising Anchor Boxes for End-to-End Object Detection. [7th Mar. 2022].
Hao Zhang, Feng Li, Shilong Liu, Lei Zhang, Hang Su, Jun Zhu, Lionel M. Ni, Heung-Yeung Shum.
[PDF] [Github]

7. Specialized Backbone for Dense Prediction

Feature Pyramid Transformer. [18th Jul. 2020] [ECCV, 2020].
Dong Zhang, Hanwang Zhang, Jinhui Tang, Meng Wang, Xiansheng Hua, Qianru Sun.
[PDF] [Github]

HRFormer: High-Resolution Transformer for Dense Prediction. [7th Nov. 2021] [NeurIPS, 2021].
Yuhui Yuan, Rao Fu, Lang Huang, WeiHong Lin, Chao Zhang, Xilin Chen, Jingdong Wang.
[PDF] [Github]

Multi-Scale High-Resolution Vision Transformer for Semantic Segmentation. [23rd Nov. 2021].
Jiaqi Gu, Hyoukjun Kwon, Dilin Wang, Wei Ye, Meng Li, Yu-Hsin Chen, Liangzhen Lai, Vikas Chandra, David Z. Pan.
[PDF]

Transformer for Segmentation

1. Patch-Based Transformer

Rethinking Semantic Segmentation from a Sequence-to-Sequence Perspective with Transformers. [25th Jul. 2021] [CVPR 2021].
Sixiao Zheng, Jiachen Lu, Hengshuang Zhao, Xiatian Zhu, Zekun Luo, Yabiao Wang, Yanwei Fu, Jianfeng Feng, Tao Xiang, Philip H. S. Torr, Li Zhang.
[PDF] [Github]

TransUNet: Transformers Make Strong Encoders for Medical Image Segmentation. [8th Feb. 2021].
Jieneng Chen, Yongyi Lu, Qihang Yu, Xiangde Luo, Ehsan Adeli, Yan Wang, Le Lu, Alan L. Yuille, Yuyin Zhou.
[PDF] [Github]

SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers. [28th Oct. 2021] [NeurIPS 2021].
Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M. Alvarez, Ping Luo.
[PDF] [Github]

2. Query-Based Transformer

Attention-Based Transformers for Instance Segmentation of Cells in Microstructures. [20th Nov. 2020] [IEEE BIBM 2020].
Tim Prangemeier, Christoph Reich, Heinz Koeppl.
[PDF]

End-to-End Video Instance Segmentation with Transformers. [8th Oct. 2021] [CVPR 2021].
Yuqing Wang, Zhaoliang Xu, Xinlong Wang, Chunhua Shen, Baoshan Cheng, Hao Shen, Huaxia Xia.
[PDF] [Github]

Instances as Queries. [23rd May 2021] [ICCV 2021].
Yuxin Fang, Shusheng Yang, Xinggang Wang, Yu Li, Chen Fang, Ying Shan, Bin Feng, Wenyu Liu.
[PDF] [Github]

ISTR: End-to-End Instance Segmentation with Transformers. [3rd May 2021].
Jie Hu, Liujuan Cao, Yao Lu, Shengchuan Zhang, Yan Wang, Ke Li, Feiyue Huang, Ling Shao, Rongrong Ji.
[PDF] [Github]

SOLQ: Segmenting Objects by Learning Queries. [30th Sep 2021] [NeurIPS 2021].
Bin Dong, Fangao Zeng, Tiancai Wang, Xiangyu Zhang, Yichen Wei.
[PDF] [Github]

MaX-DeepLab: End-to-End Panoptic Segmentation with Mask Transformers. [12th Jul. 2021] [CVPR 2021].
Huiyu Wang, Yukun Zhu, Hartwig Adam, Alan Yuille, Liang-Chieh Chen.
[PDF] [Github]

Segmenter: Transformer for Semantic Segmentation. [2nd Sep. 2021] [ICCV 2021].
Robin Strudel, Ricardo Garcia, Ivan Laptev, Cordelia Schmid.
[PDF] [Github]

Per-Pixel Classification is Not All You Need for Semantic Segmentation. [31st Oct. 2021] [NeurIPS 2021].
Bowen Cheng, Alexander G. Schwing, Alexander Kirillov.
[PDF] [Github]

Transformer for 3D Visual Recognition

1. Representation Learning

Point Transformer. [16th Dec. 2020] [ICCV 2021].
Hengshuang Zhao, Li Jiang, Jiaya Jia, Philip Torr, Vladlen Koltun.
[PDF] [Github]

PCT: Point cloud transformer. [17th Dec. 2020] [CVM 2021].
Meng-Hao Guo, Jun-Xiong Cai, Zheng-Ning Liu, Tai-Jiang Mu, Ralph R. Martin, Shi-Min Hu.
[PDF] [Github]

3DCTN: 3D Convolution-Transformer Network for Point Cloud Classification. [2nd Mar. 2022].
Dening Lu, Qian Xie, Linlin Xu, Jonathan Li.
[PDF]

Fast Point Transformer. [9th Dec. 2021] [CVPR 2022].
Chunghyun Park, Yoonwoo Jeong, Minsu Cho, Jaesik Park.
[PDF]

3D Object Detection with Pointformer. [21st Dec. 2020] [CVPR 2021].
Xuran Pan, Zhuofan Xia, Shiji Song, Li Erran Li, Gao Huang.
[PDF] [Github]

Embracing Single Stride 3D Object Detector with Sparse Transformer. [13th Dec. 2021].
Lue Fan, Ziqi Pang, Tianyuan Zhang, Yu-Xiong Wang, Hang Zhao, Feng Wang, Naiyan Wang, Zhaoxiang Zhang.
[PDF] [Github]

Voxel Transformer for 3D Object Detection. [13th Sep. 2021] [ICCV 2021].
Jiageng Mao, Yujing Xue, Minzhe Niu, Haoyue Bai, Jiashi Feng, Xiaodan Liang, Hang Xu, Chunjing Xu.
[PDF] [Github]

Voxel Set Transformer: A Set-to-Set Approach to 3D Object Detection from Point Clouds. [19th Mar. 2022].
Chenhang He, Ruihuang Li, Shuai Li, Lei Zhang.
[PDF] [Github]

Point-BERT: Pre-training 3D Point Cloud Transformers with Masked Point Modeling. [29th Nov. 2021] [CVPR 2022].
Xumin Yu, Lulu Tang, Yongming Rao, Tiejun Huang, Jie Zhou, Jiwen Lu.
[PDF] [Github]

Masked Autoencoders for Point Cloud Self-supervised Learning. [13th Mar. 2022] [CVPR 2022].
Xumin Yu, Lulu Tang, Yongming Rao, Tiejun Huang, Jie Zhou, Jiwen Lu.
[PDF] [Github]

Masked Discrimination for Self-Supervised Learning on Point Clouds. [21st Mar. 2022].
Haotian Liu, Mu Cai, Yong Jae Lee.
[PDF] [Github]

2. Cognition Mapping

An End-to-End Transformer Model for 3D Object Detection. [16th Sep. 2021] [ICCV 2021].
Ishan Misra, Rohit Girdhar, Armand Joulin.
[PDF] [Github]

Group-Free 3D Object Detection via Transformers. [23rd Apr. 2021] [ICCV 2021].
Ze Liu, Zheng Zhang, Yue Cao, Han Hu, Xin Tong.
[PDF] [Github]

Improving 3D Object Detection with Channel-wise Transformer. [23rd Aug. 2021] [ICCV 2021].
Hualian Sheng, Sijia Cai, YuAn Liu, Bing Deng, Jianqiang Huang, Xian-Sheng Hua, Min-Jian Zhao.
[PDF] [Github]

MonoDTR: Monocular 3D Object Detection with Depth-Aware Transformer. [21st Mar. 2022] [CVPR 2022].
Kuan-Chih Huang, Tsung-Han Wu, Hung-Ting Su, Winston H. Hsu.
[PDF] [Github]

MonoDETR: Depth-aware Transformer for Monocular 3D Object Detection. [28th Mar. 2022] [CVPR 2022].
Renrui Zhang, Han Qiu, Tai Wang, Xuanzhuo Xu, Ziyu Guo, Yu Qiao, Peng Gao, Hongsheng Li.
[PDF] [Github]

DETR3D: 3D Object Detection from Multi-view Images via 3D-to-2D Queries. [13th Oct. 2021] [CoRL 2022].
Yue Wang, Vitor Guizilini, Tianyuan Zhang, Yilun Wang, Hang Zhao, Justin Solomon.
[PDF] [Github]

TransFusion: Robust LiDAR-Camera Fusion for 3D Object Detection with Transformers. [22nd Mar. 2022] [CVPR 2022].
Xuyang Bai, Zeyu Hu, Xinge Zhu, Qingqiu Huang, Yilun Chen, Hongbo Fu, Chiew-Lan Tai.
[PDF] [Github]

3. Specific Processing

PoinTr: Diverse Point Cloud Completion with Geometry-Aware Transformers. [19th Aug. 2021] [ICCV 2021].
Xumin Yu, Yongming Rao, Ziyi Wang, Zuyan Liu, Jiwen Lu, Jie Zhou.
[PDF] [Github]

SnowflakeNet: Point Cloud Completion by Snowflake Point Deconvolution with Skip-Transformer. [27th Oct. 2021] [ICCV 2021].
Peng Xiang, Xin Wen, Yu-Shen Liu, Yan-Pei Cao, Pengfei Wan, Wen Zheng, Zhizhong Han.
[PDF] [Github]

Deep Point Cloud Reconstruction. [23rd Nov. 2021] [ICLR 2022].
Jaesung Choe, Byeongin Joung, Francois Rameau, Jaesik Park, In So Kweon.
[PDF] [Github]

Transformer for Multi-Sensory Data Stream

1. Homologous Stream with Interactive Fusion

MVT: Multi-view Vision Transformer for 3D Object Recognition. [25th Oct. 2021] [BMVC 2021].
Shuo Chen, Tan Yu, Ping Li.
[PDF]

Multiview Detection with Shadow Transformer (and View-Coherent Data Augmentation). [12th Aug. 2021] [ACM MM 2021].
Yunzhong Hou, Liang Zheng.
[PDF] [Github]

Multi-Modal Fusion Transformer for End-to-End Autonomous Driving. [19th Apr. 2021] [CVPR 2021].
Aditya Prakash, Kashyap Chitta, Andreas Geiger.
[PDF] [Github]

COTR: Correspondence Transformer for Matching Across Images. [15th Mar. 2021] [ICCV 2021].
Wei Jiang, Eduard Trulls, Jan Hosang, Andrea Tagliasacchi, Kwang Moo Yi.
[PDF] [Github]

Multi-view 3D Reconstruction with Transformer. [24th Mar. 2021] [ICCV 2021].
Dan Wang, Xinrui Cui, Xun Chen, Zhengxia Zou, Tianyang Shi, Septimiu Salcudean, Z. Jane Wang, Rabab Ward.
[PDF]

TransformerFusion: Monocular RGB Scene Reconstruction using Transformers. [15th Mar. 2021] [NeurIPS 2021].
Aljaž Božič, Pablo Palafox, Justus Thies, Angela Dai, Matthias Nießner.
[PDF]

FUTR3D: A Unified Sensor Fusion Framework for 3D Detection. [20th Mar. 2022].
Xuanyao Chen, Tianyuan Zhang, Yue Wang, Yilun Wang, Hang Zhao.
[PDF]

2. Homologous Stream with Transfer Fusion

Multi-view analysis of unregistered medical images using cross-view transformers. [21st Mar. 2021] [MICCAI 2021].
Gijs van Tulder, Yao Tong, Elena Marchiori.
[PDF] [Github]

Multi-view Depth Estimation using Epipolar Spatio-Temporal Networks. [26th Nov. 2020] [CVPR 2021].
Xiaoxiao Long, Lingjie Liu, Wei Li, Christian Theobalt, Wenping Wang.
[PDF] [Github]

Deep relation transformer for diagnosing glaucoma with optical coherence tomography and visual field function. [26th Sep. 2021] [TMI 2021].
Diping Song, Bin Fu, Fei Li, Jian Xiong, Junjun He, Xiulan Zhang, Yu Qiao.
[PDF]

3. Heterologous Stream for Visual Grounding

MDETR - Modulated Detection for End-to-End Multi-Modal Understanding. [26th Apr. 2021] [ICCV 2021].
Aishwarya Kamath, Mannat Singh, Yann Lecun, Gabriel Synnaeve, Ishan Misra, Nicolas Carion.
[PDF] [Github]

Referring Transformer: A One-step Approach to Multi-task Visual Grounding. [6th Jun. 2021] [NeurIPS 2021].
Muchen Li, Leonid Sigal.
[PDF]

Visual Grounding with Transformer. [10th May 2021] [ICME 2022].
Ye Du, Zehua Fu, Qingjie Liu, Yunhong Wang.
[PDF] [Github]

TransVG: End-to-End Visual Grounding with Transformers. [17th Apr. 2021] [ICCV 2021].
Jiajun Deng, Zhengyuan Yang, Tianlang Chen, Wengang Zhou, Houqiang Li.
[PDF] [Github]

Pseudo-Q: Generating Pseudo Language Queries for Visual Grounding. [16th Mar. 2022] [CVPR 2022].
Haojun Jiang, Yuanze Lin, Dongchen Han, Shiji Song, Gao Huang.
[PDF] [Github]

LanguageRefer: Spatial-Language Model for 3D Visual Grounding. [17th Jul. 2021] [CoRL 2021].
Junha Roh, Karthik Desingh, Ali Farhadi, Dieter Fox.
[PDF]

TransRefer3D: Entity-and-Relation Aware Transformer for Fine-Grained 3D Visual Grounding. [5th Aug. 2021] [ACM MM 2021].
Dailan He, Yusheng Zhao, Junyu Luo, Tianrui Hui, Shaofei Huang, Aixi Zhang, Si Liu.
[PDF]

Multi-View Transformer for 3D Visual Grounding. [5th Apr. 2022] [CVPR 2022].
Shijia Huang, Yilun Chen, Jiaya Jia, LiWei Wang.
[PDF] [Github]

Human-centric Spatio-Temporal Video Grounding With Visual Transformers. [10th Nov. 2020] [TCSVT 2021].
Zongheng Tang, Yue Liao, Si Liu, Guanbin Li, Xiaojie Jin, Hongxu Jiang, Qian Yu, Dong Xu.
[PDF] [Github]

TubeDETR: Spatio-Temporal Video Grounding with Transformers. [30th Mar. 2022] [CVPR 2022].
Antoine Yang, Antoine Miech, Josef Sivic, Ivan Laptev, Cordelia Schmid.
[PDF] [Github]

4. Heterologous Stream with Visual-Linguistic Pre-Training

VideoBERT: A Joint Model for Video and Language Representation Learning. [3rd Apr. 2019] [ICCV 2019].
Chen Sun, Austin Myers, Carl Vondrick, Kevin Murphy, Cordelia Schmid.
[PDF] [Github]

ViLBERT: Pretraining Task-Agnostic Visiolinguistic Representations for Vision-and-Language Tasks. [3rd Aug. 2019] [NeurIPS 2019].
Jiasen Lu, Dhruv Batra, Devi Parikh, Stefan Lee.
[PDF] [Github]

LXMERT: Learning Cross-Modality Encoder Representations from Transformers. [20th Aug. 2019] [IJCNLP 2019].
Hao Tan, Mohit Bansal.
[PDF] [Github]

VisualBERT: A Simple and Performant Baseline for Vision and Language. [20th Aug. 2019].
Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, Kai-Wei Chang.
[PDF] [Github]

VL-BERT: Pre-training of Generic Visual-Linguistic Representations. [22nd Aug. 2019] [ICLR 2020].
Weijie Su, Xizhou Zhu, Yue Cao, Bin Li, Lewei Lu, Furu Wei, Jifeng Dai.
[PDF] [Github]

UNITER: UNiversal Image-TExt Representation Learning. [24th Sep. 2019] [ECCV 2020].
Yen-Chun Chen, Linjie Li, Licheng Yu, Ahmed El Kholy, Faisal Ahmed, Zhe Gan, Yu Cheng, Jingjing Liu.
[PDF] [Github]

Oscar: Object-Semantics Aligned Pre-training for Vision-Language Tasks. [13th Apr. 2020] [ECCV 2020].
Xiujun Li, Xi Yin, Chunyuan Li, Pengchuan Zhang, Xiao-Wei Hu, Lei Zhang, Lijuan Wang, Houdong Hu, Li Dong, Furu Wei, Yejin Choi, Jianfeng Gao.
[PDF] [Github]

Unified Vision-Language Pre-Training for Image Captioning and VQA. [24th Sep. 2019] [AAAI 2020].
Luowei Zhou, Hamid Palangi, Lei Zhang, Houdong Hu, Jason J. Corso, Jianfeng Gao.
[PDF] [Github]

ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision. [5th Feb. 2021] [ICML 2021].
Wonjae Kim, Bokyung Son, Ildoo Kim.
[PDF] [Github]

VinVL: Revisiting Visual Representations in Vision-Language Models. [2nd Jan. 2021] [CVPR 2021].
Pengchuan Zhang, Xiujun Li, Xiaowei Hu, Jianwei Yang, Lei Zhang, Lijuan Wang, Yejin Choi, Jianfeng Gao.
[PDF] [Github]

Learning Transferable Visual Models From Natural Language Supervision. [26th Feb. 2021] [ICML 2021].
Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever.
[PDF] [Github]

Zero-Shot Text-to-Image Generation. [24th Feb. 2021] [ICML 2021].
Aditya Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, Ilya Sutskever.
[PDF] [Github]

Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision. [11th Feb. 2021] [ICML 2021].
Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc V. Le, YunHsuan Sung, Zhen Li, Tom Duerig.
[PDF] [Github]

UniT: Multimodal Multitask Learning with a Unified Transformer. [22nd Feb. 2021] [ICCV 2021].
Ronghang Hu, Amanpreet Singh.
[PDF] [Github]

SimVLM: Simple Visual Language Model Pretraining with Weak Supervision. [24th Aug. 2021] [ICLR 2022].
ZiRui Wang, Jiahui Yu, Adams Wei Yu, Zihang Dai, Yulia Tsvetkov, Yuan Cao.
[PDF]

data2vec: A General Framework for Self-supervised Learning in Speech, Vision and Language. [7th Feb. 2022].
Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, Michael Auli.
[PDF] [Github]

More Awesome Transformer Attention Model Lists

cmhungsteve/Awesome-Transformer-Attention

Citation

If you find the listing and survey helpful, please cite it as follows:

@article{liu2023survey,
  title={A survey of visual transformers},
  author={Liu, Yang and Zhang, Yao and Wang, Yixin and Hou, Feng and Yuan, Jin and Tian, Jiang and Zhang, Yang and Shi, Zhongchao and Fan, Jianping and He, Zhiqiang},
  journal={IEEE Transactions on Neural Networks and Learning Systems},
  year={2023},
  publisher={IEEE}
}
