Ashish Patel 🇮🇳’s Post

𝗗𝗮𝘆-𝟮𝟰𝟲 𝗖𝗼𝗺𝗽𝘂𝘁𝗲𝗿 𝗩𝗶𝘀𝗶𝗼𝗻 𝗟𝗲𝗮𝗿𝗻𝗶𝗻𝗴
𝗗𝗲𝗲𝗽𝗚𝗖𝗡𝘀: Can GCNs Go as Deep as CNNs? by KAUST (King Abdullah University of Science and Technology), Thuwal, Saudi Arabia

Follow me for similar posts: 🇮🇳 Ashish Patel

Interesting Facts:
🔸 This paper was published at ICCV 2019 and has 318 citations.
-------------------------------------------------------------------
𝗔𝗺𝗮𝘇𝗶𝗻𝗴 𝗥𝗲𝘀𝗲𝗮𝗿𝗰𝗵: https://lnkd.in/eE5Aui2n
Code:
TensorFlow: https://lnkd.in/eac_mWFK
PyTorch: https://lnkd.in/e7ybCqTf
-------------------------------------------------------------------
𝗜𝗠𝗣𝗢𝗥𝗧𝗔𝗡𝗖𝗘
🔸 Convolutional Neural Networks (CNNs) achieve impressive performance in a wide variety of fields. Their success received a massive boost once very deep CNN models could be trained reliably.
🔸 Despite their merits, CNNs fail to properly address problems with non-Euclidean data. To overcome this challenge, Graph Convolutional Networks (GCNs) build graphs to represent non-Euclidean data, borrow concepts from CNNs, and apply them in training.
🔸 GCNs show promising results, but they are usually limited to very shallow models due to the vanishing gradient problem. As a result, most state-of-the-art GCN models are no deeper than 3 or 4 layers. In this work, the authors present new ways to successfully train very deep GCNs.
🔸 They do this by borrowing concepts from CNNs, specifically residual/dense connections and dilated convolutions, and adapting them to GCN architectures. Extensive experiments show the positive effect of these deep GCN frameworks.
🔸 Finally, they use these new concepts to build a very deep 56-layer GCN and show how it significantly boosts performance (+3.7% mIoU over the state of the art) on the task of point cloud semantic segmentation.
🔸 The community can greatly benefit from this work, as it opens up many opportunities for advancing GCN-based research.

#computervision #artificialintelligence #machinelearning
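The key trick above, adding a residual (skip) connection around each graph-convolution layer so gradients survive a deep stack, can be sketched in plain NumPy. This is a minimal illustration, not the authors' implementation: the layer H' = ReLU(Â H W) + H with a symmetrically normalized adjacency Â, and all graph sizes, weights, and the helper names (`gcn_layer`, `res_gcn_layer`, `normalize_adjacency`) are hypothetical choices for the demo.

```python
import numpy as np

def normalize_adjacency(A):
    """Symmetric normalization with self-loops: D^{-1/2} (A + I) D^{-1/2}."""
    A_tilde = A + np.eye(A.shape[0])
    d = A_tilde.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_tilde @ D_inv_sqrt

def gcn_layer(H, A_hat, W):
    """One plain graph-convolution step: aggregate neighbors, transform, ReLU."""
    return np.maximum(A_hat @ H @ W, 0.0)

def res_gcn_layer(H, A_hat, W):
    """Residual GCN layer (ResGCN-style): add the input back to the output,
    which keeps gradients flowing through very deep stacks."""
    return gcn_layer(H, A_hat, W) + H

# Toy 4-node path graph with 8-dim node features (made-up data).
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
A_hat = normalize_adjacency(A)
H = rng.standard_normal((4, 8))
W = rng.standard_normal((8, 8)) * 0.01  # small weights keep activations stable

# Stack 56 residual layers, mirroring the depth reported in the post.
deep = H.copy()
for _ in range(56):
    deep = res_gcn_layer(deep, A_hat, W)

print(deep.shape)  # node features survive the full 56-layer stack
```

Without the `+ H` skip connection, stacking 56 of these layers would repeatedly multiply by small weights and squash the signal; the residual path is what lets the depth pay off, the same role it plays in ResNets.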
