Day-32 Computer Vision Learning: 𝗦𝗾𝘂𝗲𝗲𝘇𝗲𝗡𝗲𝘁 by DeepScale, the University of California, Berkeley, and Stanford University

Follow me for similar posts: 🇮🇳 Ashish Patel

Interesting Facts:
🔸 Published in 2016 on #arxiv, the paper has already earned over 3,481 citations.
🔸 Smaller Convolutional Neural Networks (CNNs) require less communication across servers during distributed training.
🔸 Smaller CNNs require less bandwidth to export a new model from the cloud to an autonomous car.
🔸 Smaller CNNs are more feasible to deploy on FPGAs and other hardware with limited memory.
-------------------------------------------------------------------
𝗔𝗺𝗮𝘇𝗶𝗻𝗴 𝗥𝗲𝘀𝗲𝗮𝗿𝗰𝗵: https://lnkd.in/etCDUBf
Caffe: https://bit.ly/3oBBu7Q
PyTorch: https://bit.ly/3pCc1wf
Keras: https://bit.ly/3jcHJ0D
TensorFlow: https://bit.ly/39yW84a
-------------------------------------------------------------------
𝗜𝗠𝗣𝗢𝗥𝗧𝗔𝗡𝗖𝗘
🔸 SqueezeNet is designed primarily to reduce the number of CNN model parameters.
🔸 Fire Module: the idea is very simple. Split a conv layer into two layers, a squeeze layer followed by an expand layer, each with its own ReLU activation.
🔸 Replacing a 3x3 convolution kernel with a 1x1 kernel reduces that kernel's parameter count by 9x.
#artificialintelligence #computervision #data #india
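The parameter savings above can be sketched with a quick back-of-the-envelope calculation. This is a minimal illustration in plain Python; the channel numbers (96 input channels, squeeze=16, expand=64+64) are taken as an assumed example configuration in the spirit of the paper's early fire modules, and bias terms are ignored for simplicity:

```python
# Sketch: why the Fire module and 1x1 kernels shrink parameter counts.
# Assumed example sizes; bias terms ignored.

def conv_params(in_ch, out_ch, k):
    """Weight count of a k x k convolution layer (no bias)."""
    return k * k * in_ch * out_ch

# 1) Swapping a 3x3 kernel for a 1x1 kernel cuts weights by 9x:
ratio = conv_params(64, 64, 3) // conv_params(64, 64, 1)
print(ratio)  # 9

# 2) Fire module = squeeze (1x1) followed by expand (mix of 1x1 and 3x3):
in_ch, squeeze, e1x1, e3x3 = 96, 16, 64, 64
fire = (conv_params(in_ch, squeeze, 1)       # squeeze layer
        + conv_params(squeeze, e1x1, 1)      # expand 1x1 branch
        + conv_params(squeeze, e3x3, 3))     # expand 3x3 branch
plain = conv_params(in_ch, e1x1 + e3x3, 3)   # equivalent plain 3x3 conv
print(fire, plain)  # 11776 110592
```

The squeeze layer shrinks the channel count feeding the expensive 3x3 branch, which is where most of the ~9x module-level saving in this toy comparison comes from.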
For Previous post you can visit this github : https://github.com/ashishpatel26/365-Days-Computer-Vision-Learning-Linkedin-Post