𝗗𝗮𝘆-𝟯𝟮𝟴 𝗖𝗼𝗺𝗽𝘂𝘁𝗲𝗿 𝗩𝗶𝘀𝗶𝗼𝗻 𝗟𝗲𝗮𝗿𝗻𝗶𝗻𝗴: 𝗥𝗲𝘀𝗲𝗮𝗿𝗰𝗵𝗲𝗿𝘀 𝗳𝗿𝗼𝗺 𝘁𝗵𝗲 𝗡𝗮𝘁𝗶𝗼𝗻𝗮𝗹 𝗨𝗻𝗶𝘃𝗲𝗿𝘀𝗶𝘁𝘆 𝗼𝗳 𝗦𝗶𝗻𝗴𝗮𝗽𝗼𝗿𝗲 𝗵𝗮𝘃𝗲 𝗶𝗻𝘁𝗿𝗼𝗱𝘂𝗰𝗲𝗱 "𝗠𝗲𝘁𝗮𝗙𝗼𝗿𝗺𝗲𝗿 𝗶𝘀 𝗔𝗰𝘁𝘂𝗮𝗹𝗹𝘆 𝗪𝗵𝗮𝘁 𝗬𝗼𝘂 𝗡𝗲𝗲𝗱 𝗳𝗼𝗿 𝗩𝗶𝘀𝗶𝗼𝗻"

Follow me for similar posts: 🇮🇳 Ashish Patel
-------------------------------------------------------------------
𝗜𝗻𝘁𝗲𝗿𝗲𝘀𝘁𝗶𝗻𝗴 𝗙𝗮𝗰𝘁𝘀:
🔸 Paper: 𝗠𝗲𝘁𝗮𝗙𝗼𝗿𝗺𝗲𝗿 𝗶𝘀 𝗔𝗰𝘁𝘂𝗮𝗹𝗹𝘆 𝗪𝗵𝗮𝘁 𝗬𝗼𝘂 𝗡𝗲𝗲𝗱 𝗳𝗼𝗿 𝗩𝗶𝘀𝗶𝗼𝗻
🔸 This paper was published on arXiv in 2021.
🔸 Transformers have gained much interest and success in the computer vision field. Since the seminal vision transformer (ViT) [16], which adapted pure transformers to image classification, many follow-up models have been developed that make further improvements and achieve promising performance on various computer vision tasks.
-------------------------------------------------------------------
𝗜𝗠𝗣𝗢𝗥𝗧𝗔𝗡𝗖𝗘
🔸 Transformers have shown great potential in computer vision tasks. A common belief is that their attention-based token mixer module contributes most to their competence. However, recent works show that the attention-based module in transformers can be replaced by spatial MLPs and the resulting models still perform quite well.
🔸 Based on this observation, the authors hypothesize that the general architecture of transformers, rather than the specific token mixer module, is more essential to the model's performance. To verify this, they deliberately replace the attention module in transformers with an embarrassingly simple spatial pooling operator that performs only the most basic token mixing (see the PyTorch sketch after this post).
🔸 Surprisingly, the derived model, termed PoolFormer, achieves competitive performance on multiple computer vision tasks. For example, on ImageNet-1K, PoolFormer achieves 82.1% top-1 accuracy, surpassing the well-tuned vision transformer/MLP-like baselines DeiT-B/ResMLP-B24 by 0.3%/1.1% accuracy with 35%/52% fewer parameters and 48%/60% fewer MACs.
🔸 The effectiveness of PoolFormer verifies this hypothesis and motivates the concept of "MetaFormer", a general architecture abstracted from transformers without specifying the token mixer. Based on extensive experiments, the authors argue that MetaFormer is the key player behind the strong results of recent transformer and MLP-like models on vision tasks. The paper calls for more future research dedicated to improving MetaFormer itself instead of focusing on the token mixer modules, and proposes PoolFormer as a starting baseline for future MetaFormer architecture design.
-------------------------------------------------------------------
#computervision #artificialintelligence #innovation
-------------------------------------------------------------------
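For intuition, here is a minimal PyTorch sketch of the PoolFormer idea: a MetaFormer block whose token mixer is plain average pooling. The class names, pool size, and the GroupNorm/1x1-conv choices here are illustrative assumptions on my part, not the authors' exact code; the official repository linked below is the reference implementation.

```python
import torch
import torch.nn as nn


class PoolingTokenMixer(nn.Module):
    """Token mixing via simple average pooling instead of self-attention."""

    def __init__(self, pool_size: int = 3):
        super().__init__()
        self.pool = nn.AvgPool2d(
            pool_size, stride=1, padding=pool_size // 2, count_include_pad=False
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, height, width).
        # Subtracting x keeps only the "mixing" term, since the residual
        # connection in the block adds the input back anyway.
        return self.pool(x) - x


class MetaFormerBlock(nn.Module):
    """General MetaFormer block: norm -> token mixer -> norm -> channel MLP,
    each wrapped in a residual connection. Plugging attention in as the
    token mixer recovers a transformer block; pooling gives PoolFormer."""

    def __init__(self, dim: int, mlp_ratio: int = 4):
        super().__init__()
        # GroupNorm with one group acts like a LayerNorm over the channels
        # of an NCHW feature map (an assumed, common normalization choice).
        self.norm1 = nn.GroupNorm(1, dim)
        self.token_mixer = PoolingTokenMixer()
        self.norm2 = nn.GroupNorm(1, dim)
        # Channel MLP implemented as two 1x1 convolutions.
        self.mlp = nn.Sequential(
            nn.Conv2d(dim, dim * mlp_ratio, 1),
            nn.GELU(),
            nn.Conv2d(dim * mlp_ratio, dim, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x + self.token_mixer(self.norm1(x))
        x = x + self.mlp(self.norm2(x))
        return x


# Quick shape check on a dummy feature map.
x = torch.randn(2, 64, 56, 56)
print(MetaFormerBlock(64)(x).shape)  # -> torch.Size([2, 64, 56, 56])
```

The point of the sketch: the block structure (norm, token mixer, norm, channel MLP, residuals) is untouched; only the token mixer is swapped out, which is exactly the MetaFormer abstraction the paper argues for.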
Amazing Research: https://arxiv.org/abs/2111.11418
Code: https://github.com/sail-sg/poolformer
Github: https://github.com/ashishpatel26/365-Days-Computer-Vision-Learning-Linkedin-Post