𝗗𝗮𝘆-𝟭𝟳𝟮 Computer Vision Learning
𝗔𝘅𝗶𝗮𝗹-𝗗𝗲𝗲𝗽𝗟𝗮𝗯: Stand-Alone Axial-Attention for Panoptic Segmentation, by Google and Johns Hopkins University
Follow me for similar posts: 🇮🇳 Ashish Patel
Interesting Facts:
🔸 This is an ECCV 2020 paper with over 3,997 citations.
-------------------------------------------------------------------
𝗔𝗺𝗮𝘇𝗶𝗻𝗴 𝗥𝗲𝘀𝗲𝗮𝗿𝗰𝗵: https://lnkd.in/eQijq89
Code: https://lnkd.in/eA69E37
-------------------------------------------------------------------
𝗜𝗠𝗣𝗢𝗥𝗧𝗔𝗡𝗖𝗘
🔸 Convolution exploits locality for efficiency but loses long-range context. Self-attention has been used to augment CNNs with non-local interactions, and recent work has shown that fully attentional networks can be built by stacking self-attention layers restricted to local regions.
🔸 This paper removes that local constraint by factorizing 2D self-attention into two 1D self-attentions, one along the height axis and one along the width axis (see the sketch below).
🔸 This factorization reduces computational complexity and allows attention over large or even global regions.
🔸 The authors also propose a position-sensitive self-attention design. Combining the two yields the position-sensitive axial-attention layer, a novel building block that can be stacked to form axial-attention models for image classification and dense prediction. The authors demonstrate the effectiveness of the model on four large-scale datasets. #computervision #artificialintelligence #data
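For intuition, here is a minimal PyTorch sketch of the axial-attention idea (not the authors' released code; the class names, single-head design, and simplified relative-position bias are illustrative assumptions). Global 2D self-attention over an H×W feature map costs O(H²W²); factorizing it into a height-axis pass followed by a width-axis pass reduces this to O(HW(H+W)) while still giving every pixel a global receptive field.

```python
import torch
import torch.nn as nn

class AxialAttention1D(nn.Module):
    """Single-head self-attention along one axis, with a simplified learned
    relative-position bias. (The paper's position-sensitive design goes
    further and also conditions the keys and values on relative position.)"""
    def __init__(self, dim, length):
        super().__init__()
        self.qkv = nn.Linear(dim, dim * 3, bias=False)
        self.scale = dim ** -0.5
        # One learnable bias per relative offset in [-(L-1), L-1].
        self.rel_bias = nn.Parameter(torch.zeros(2 * length - 1))
        idx = torch.arange(length)
        self.register_buffer("rel_idx", idx[None, :] - idx[:, None] + length - 1)

    def forward(self, x):
        # x: (batch, length, dim) -- one row or one column of the feature map
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        logits = q @ k.transpose(-2, -1) * self.scale
        logits = logits + self.rel_bias[self.rel_idx]  # position-sensitive term
        return torch.softmax(logits, dim=-1) @ v

class AxialBlock(nn.Module):
    """Height-axis attention followed by width-axis attention: together
    they cover the whole 2D plane at 1D-attention cost."""
    def __init__(self, dim, height, width):
        super().__init__()
        self.height_attn = AxialAttention1D(dim, height)
        self.width_attn = AxialAttention1D(dim, width)

    def forward(self, x):
        # x: (batch, height, width, dim)
        b, h, w, d = x.shape
        # Height pass: treat each of the W columns as a length-H sequence.
        x = x.permute(0, 2, 1, 3).reshape(b * w, h, d)
        x = self.height_attn(x).reshape(b, w, h, d).permute(0, 2, 1, 3)
        # Width pass: treat each of the H rows as a length-W sequence.
        x = x.reshape(b * h, w, d)
        x = self.width_attn(x).reshape(b, h, w, d)
        return x

feat = torch.randn(2, 16, 16, 64)   # toy feature map
out = AxialBlock(64, 16, 16)(feat)
print(out.shape)                    # torch.Size([2, 16, 16, 64])
```

In the paper itself, these axial-attention blocks replace convolutions inside a ResNet-style backbone to build Axial-DeepLab for panoptic segmentation; the sketch above only shows the factorization that makes that feasible.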
One general question: with so many new algorithms to learn every day, how do we select the most suitable one for a given application? Is memorizing everything the only way?
pramodin.com: Thanks for posting.