Day-26 Computer Vision Learning: ResNeXt, by UC San Diego and Facebook Research
Follow me for similar posts: 🇮🇳 Ashish Patel 🇮🇳

Interesting Facts:
🔸 Published at CVPR 2017, the paper has already gathered over 3,574 citations.
🔸 ResNeXt was the 1st runner-up of the ILSVRC 2016 classification task.
🔸 The model name, ResNeXt, contains "Next": it stands for the next dimension, on top of ResNet.
-------------------------------------------------------------------
𝗔𝗺𝗮𝘇𝗶𝗻𝗴 𝗥𝗲𝘀𝗲𝗮𝗿𝗰𝗵: https://lnkd.in/eEZPMSh
Official Facebook Code: https://bit.ly/3ccLD88
Keras: https://bit.ly/3ccMaXG
TensorFlow: https://bit.ly/3a8m8SO
PyTorch: https://bit.ly/368qFDN
-------------------------------------------------------------------
𝗜𝗠𝗣𝗢𝗥𝗧𝗔𝗡𝗖𝗘
🔸 The authors start from the fully connected layer and its inner product: as the paper puts it, the inner product can be thought of as a form of aggregating transformation.
🔸 They propose "cardinality", the size of the set of parallel transformations in a block; the paper argues that increasing cardinality is more effective than going deeper or wider.
#artificialintelligence #computervision #innovation
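To make "cardinality" concrete, here is a minimal NumPy sketch of an aggregated transformation with C parallel branches whose outputs are summed. The dimensions, weights, and function names are illustrative assumptions, not taken from the paper (which uses grouped convolutions inside bottleneck blocks):

```python
import numpy as np

rng = np.random.default_rng(0)

def aggregated_transform(x, branch_weights):
    # Each branch applies its own transform T_i(x) (here a plain linear
    # map), and the C branch outputs are aggregated by summation.
    return sum(W @ x for W in branch_weights)

C = 32                       # cardinality: number of parallel branches
d_in, d_out = 64, 64         # illustrative feature dimensions
branch_weights = [rng.normal(size=(d_out, d_in)) for _ in range(C)]

x = rng.normal(size=d_in)
y = aggregated_transform(x, branch_weights)
print(y.shape)  # (64,)
```

Raising C adds branches (more diverse transforms) without making any single branch deeper or wider, which is the axis the paper advocates scaling.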
For similar previous posts, visit GitHub: https://github.com/ashishpatel26/365-Days-Computer-Vision-Learning-Linkedin-Post
𝗥𝗲𝘀𝗡𝗲𝘅𝘁 𝗕𝗹𝗼𝗰𝗸:
For a simple neuron (shown in the figure), the output is the summation of w_i times x_i. This operation can be recast as a combination of splitting, transforming, and aggregating:
🔸 Splitting: the vector x is sliced into low-dimensional embeddings; here, each is a single-dimension subspace x_i.
🔸 Transforming: each low-dimensional representation is transformed; here, it is simply scaled: w_i * x_i.
🔸 Aggregating: the transformations of all embeddings are aggregated by summation.