Ashish Patel 🇮🇳’s Post

Day-17 Computer Vision Learning: Graph Convolutional Network (GCN)

Introduced by Thomas Kipf and Max Welling at the University of Amsterdam.
🔸 Originally published at ICLR 2017, now with more than 6,500 citations.
🔸 First author Thomas Kipf is a Research Scientist on the Google Brain Team.
Trends on GCNN: https://lnkd.in/exbmBay
-------------------------------------------------------------------
𝗔𝗺𝗮𝘇𝗶𝗻𝗴 𝗥𝗲𝘀𝗲𝗮𝗿𝗰𝗵: https://lnkd.in/e46_pNw
Official Blog: https://lnkd.in/e3jEhMR
Official Tensorflow GCN: https://bit.ly/3sFypa7
Pytorch GCN: https://lnkd.in/ed4r-GK
Keras GCN: https://lnkd.in/e_KVemY
-------------------------------------------------------------------
𝗜𝗠𝗣𝗢𝗥𝗧𝗔𝗡𝗖𝗘
🔸 A Graph Convolutional Network (GCN) is an approach for semi-supervised learning on graph-structured data. It is based on an efficient variant of convolutional neural networks that operates directly on graphs.
🔸 The choice of convolutional architecture is motivated via a localized first-order approximation of spectral graph convolutions.
🔸 The model scales linearly in the number of graph edges and learns hidden-layer representations that encode both local graph structure and node features.
#innovation #artificialintelligence #computervision
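The localized first-order propagation rule described above can be sketched in a few lines of NumPy. This is a minimal illustrative sketch, not the official implementations linked in the post; the function name `gcn_layer` and the toy graph are assumptions for demonstration:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN layer: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W).

    A: (n, n) adjacency matrix, H: (n, f_in) node features,
    W: (f_in, f_out) weight matrix shared across all nodes.
    """
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    d = A_hat.sum(axis=1)                   # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))  # D^-1/2
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt
    return np.maximum(A_norm @ H @ W, 0)    # ReLU activation

# Toy graph: 3 nodes in a path 0-1-2 (cost per layer is linear in edges).
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
H = np.eye(3)        # one-hot node features
W = np.ones((3, 2))  # dummy (untrained) weights
out = gcn_layer(A, H, W)
print(out.shape)     # (3, 2): one hidden representation per node
```

Each layer mixes a node's features with its immediate neighbors', so stacking k layers lets information flow k hops across the graph.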


Ashish, just this morning I was wanting to study exactly this, and now you have provided it! Thanks, buddy!


🔸 The term 'convolution' in Graph Convolutional Networks is similar to Convolutional Neural Networks in terms of weight sharing. The main difference lies in the data structure: GCNs are a generalization of CNNs that can work on data with an underlying non-regular structure.
🔸 Inserting the adjacency matrix (A) into the forward-pass equation of a GCN enables the model to learn the features of neighboring nodes. This mechanism can be seen as a message-passing operation along the nodes of the graph.
🔸 The renormalization trick is used to normalize the features in the fast approximate spectral-based graph convolutions of Thomas Kipf and Max Welling (2017).
🔸 GCNs can learn feature representations even before training.
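The message-passing role of the adjacency matrix, and why self-loops are added before normalizing, can be seen in a tiny NumPy sketch (the star graph and scalar features here are illustrative assumptions):

```python
import numpy as np

# Multiplying features by the adjacency matrix A sums each node's
# neighbours' features (message passing). Adding the identity I
# (self-loops) also retains the node's own features, which is the
# starting point of the renormalization trick in Kipf & Welling (2017).
A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]], dtype=float)   # star graph, hub = node 0
H = np.array([[1.0], [2.0], [3.0]])      # one scalar feature per node

neighbours_only = A @ H          # node 0 receives 2 + 3 = 5
with_self = (A + np.eye(3)) @ H  # node 0 receives 1 + 2 + 3 = 6
print(neighbours_only[0, 0], with_self[0, 0])  # 5.0 6.0
```

Without the self-loop term, a node's own features would be discarded at every layer; the symmetric D^-1/2 scaling then keeps high-degree nodes from dominating the aggregation.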
