Ashish Patel 🇮🇳’s Post

𝗗𝗮𝘆-𝟭𝟳𝟮 Computer Vision Learning
𝗔𝘅𝗶𝗮𝗹-𝗗𝗲𝗲𝗽𝗟𝗮𝗯: Stand-Alone Axial-Attention for Panoptic Segmentation, by Google and Johns Hopkins University

Follow me for similar posts: 🇮🇳 Ashish Patel

Interesting fact:
🔸 This paper appeared at ECCV 2020 and has over 3,997 citations.
-------------------------------------------------------------------
𝗔𝗺𝗮𝘇𝗶𝗻𝗴 𝗥𝗲𝘀𝗲𝗮𝗿𝗰𝗵: https://lnkd.in/eQijq89
Code: https://lnkd.in/eA69E37
-------------------------------------------------------------------
𝗜𝗠𝗣𝗢𝗥𝗧𝗔𝗡𝗖𝗘
🔸 Convolution exploits locality for efficiency, but at the cost of long-range context. Self-attention has been used to augment CNNs with non-local interactions, and recent work shows that fully attentional networks can be built by stacking self-attention layers restricted to local regions.
🔸 This paper removes that local-region constraint by factorizing 2D self-attention into two 1D self-attentions, one along the height axis and one along the width axis.
🔸 This factorization reduces computational complexity and allows attention over larger, even global, regions.
🔸 The authors also propose a position-sensitive self-attention design. Combining the two yields the position-sensitive axial-attention layer, a novel building block that can be stacked to form axial-attention models for image classification and dense prediction. Its effectiveness is demonstrated on four large-scale datasets.

#computervision #artificialintelligence #data
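The core idea above, running 1D self-attention along the height axis and then along the width axis instead of full 2D attention, can be sketched in a few lines of NumPy. This is a minimal illustration only: it omits the learned query/key/value projections, multiple heads, and the paper's position-sensitive relative-position terms, and all function names here are hypothetical, not from the released code.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_1d(x):
    # Plain scaled dot-product self-attention over one axis.
    # x: (seq, dim); queries = keys = values = x (no learned projections).
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)      # (seq, seq) pairwise similarities
    return softmax(scores) @ x         # (seq, dim) attended output

def axial_attention(feat):
    # feat: (H, W, C) feature map.
    # Full 2D self-attention costs O((H*W)^2) score entries;
    # factorizing into height-then-width attention costs O(H*W*(H+W)).
    H, W, C = feat.shape
    # Height-axis attention: one 1D attention per column.
    out = np.stack([attention_1d(feat[:, w, :]) for w in range(W)], axis=1)
    # Width-axis attention: one 1D attention per row of the result.
    out = np.stack([attention_1d(out[h, :, :]) for h in range(H)], axis=0)
    return out

feat = np.random.rand(8, 8, 16)
out = axial_attention(feat)
print(out.shape)  # (8, 8, 16)
```

Because each position attends along its whole row and column, two stacked axial layers already give every pixel an (indirect) global receptive field at far lower cost than dense 2D attention.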

PRAMOD KUMAR GIRI (pramodin.com):
Thanks for posting
Monika Patel:
👍👍
PRANAV KUMAR:
Prabhat Kumar Madhavi K
Onkar Mulay:
One general question... with so many algorithms being published every day, how do we select the most suitable one for a given application? Is remembering everything the only way?
