Day-67 Computer Vision Learning: Rethinking ImageNet Pre-training (Object Detection, Semantic Segmentation) by Facebook AI Research (FAIR)

Follow me for similar posts: 🇮🇳 Ashish Patel

Interesting Facts:
🔸 Published at ICCV 2019, the paper has already received over 366 citations.
🔸 Training from scratch is not worse than ImageNet pre-training.
🔸 Pre-training has been preferred over training from scratch in many papers. But is the pre-trained knowledge really useful when transferred to other computer vision tasks?
-------------------------------------------------------------------
𝗔𝗺𝗮𝘇𝗶𝗻𝗴 𝗥𝗲𝘀𝗲𝗮𝗿𝗰𝗵 : https://lnkd.in/ejeVtVC
Official Code : https://bit.ly/3v2Qdgk
-------------------------------------------------------------------
𝗜𝗠𝗣𝗢𝗥𝗧𝗔𝗡𝗖𝗘
🔸 Training from random initialization is surprisingly robust; the results hold even when (i) using only 10% of the training data, (ii) using deeper and wider models, and (iii) evaluating across multiple tasks and metrics. (See the sketch below for the two initialization settings being compared.)
🔸 ImageNet pre-training speeds up convergence early in training, but does not necessarily provide regularization or improve final target-task accuracy.
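To make the two settings concrete, here is a minimal PyTorch/torchvision sketch contrasting a Mask R-CNN whose backbone starts from ImageNet weights with one trained entirely from random initialization. This is an illustrative setup, not the paper's Detectron code; the model builder and weight enum are standard torchvision APIs, and the training loop is omitted.

```python
from torchvision.models import ResNet50_Weights
from torchvision.models.detection import maskrcnn_resnet50_fpn

# Conventional recipe: ResNet-50 backbone initialized from ImageNet
# classification weights, then fine-tuned on the target task.
model_finetuned = maskrcnn_resnet50_fpn(
    weights=None,                                     # no COCO detection weights
    weights_backbone=ResNet50_Weights.IMAGENET1K_V1,  # ImageNet pre-training
)

# Setting studied in the paper: no ImageNet knowledge at all; both the
# backbone and the detection heads start from random initialization.
model_scratch = maskrcnn_resnet50_fpn(
    weights=None,
    weights_backbone=None,
)

# Note (a finding from the paper, not enforced by this sketch): from-scratch
# training catches up with fine-tuning only given a long enough schedule and
# normalization suited to small detection batches, e.g. GroupNorm or SyncBN
# rather than frozen BatchNorm.
```

In both models the detection heads start random; the only difference is whether the backbone carries ImageNet features, which is exactly the variable the paper isolates.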
