𝗗𝗮𝘆-𝟯𝟴𝟴 𝗖𝗼𝗺𝗽𝘂𝘁𝗲𝗿 𝗩𝗶𝘀𝗶𝗼𝗻 𝗟𝗲𝗮𝗿𝗻𝗶𝗻𝗴
DeepMind Researchers Propose 'ReLICv2': Pushing The Limits of Self-Supervised ResNets
Follow me for similar posts: Ashish Patel
-------------------------------------------------------------------
𝗜𝗻𝘁𝗲𝗿𝗲𝘀𝘁𝗶𝗻𝗴 𝗙𝗮𝗰𝘁𝘀 :
🔸 Paper: 'ReLICv2': Pushing The Limits of Self-Supervised ResNets
🔸 This paper was published on arXiv in 2022.
🔸 ReLICv2 is built on the Representation Learning via Invariant Causal Mechanisms (ReLIC) framework.
-------------------------------------------------------------------
𝗜𝗠𝗣𝗢𝗥𝗧𝗔𝗡𝗖𝗘
🔸 Despite recent progress made by self-supervised methods in representation learning with residual networks, they still underperform supervised learning on the ImageNet classification benchmark, limiting their applicability in performance-critical settings.
🔸 Building on prior theoretical insights from Mitrovic et al., 2021, the authors propose ReLICv2, which combines an explicit invariance loss with a contrastive objective over a varied set of appropriately constructed data views.
🔸 ReLICv2 achieves 77.1% top-1 classification accuracy on ImageNet using linear evaluation with a ResNet50 architecture, and 80.6% with larger ResNet models, outperforming previous state-of-the-art self-supervised approaches by a wide margin.
🔸 Most notably, ReLICv2 is the first representation learning method to consistently outperform the supervised baseline in a like-for-like comparison across a range of standard ResNet architectures. Finally, despite using ResNet encoders, ReLICv2 is comparable to state-of-the-art self-supervised vision transformers.
#computervision #artificialintelligence #innovation
https://arxiv.org/pdf/2201.05119.pdf
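The "contrastive objective plus explicit invariance loss" idea above can be sketched in a few lines of PyTorch. This is a minimal illustrative sketch, not the paper's exact loss: the function name `relic_style_loss`, the weighting parameter `alpha`, and the specific KL-based invariance term are my assumptions; ReLICv2's actual objective uses carefully constructed multi-view and saliency-based augmentations and a more elaborate formulation.

```python
import torch
import torch.nn.functional as F

def relic_style_loss(z1, z2, temperature=0.1, alpha=1.0):
    """Hypothetical sketch: InfoNCE contrastive loss plus an invariance penalty.

    z1, z2: (N, D) embeddings of two augmented views of the same N images.
    The KL term encourages the softmax similarity distribution to be the
    same regardless of which view serves as the anchor (invariance across views).
    """
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)

    # Pairwise similarity logits between the two views, scaled by temperature.
    logits12 = z1 @ z2.t() / temperature  # anchor = view 1
    logits21 = z2 @ z1.t() / temperature  # anchor = view 2

    # Contrastive part: the matching image in the other view is the positive.
    targets = torch.arange(z1.size(0))
    contrastive = (F.cross_entropy(logits12, targets)
                   + F.cross_entropy(logits21, targets))

    # Explicit invariance part: KL between the two similarity distributions.
    log_p12 = F.log_softmax(logits12, dim=1)
    p21 = F.softmax(logits21, dim=1)
    invariance = F.kl_div(log_p12, p21, reduction="batchmean")

    return contrastive + alpha * invariance
```

In this sketch the contrastive term pulls matched views together while the KL term directly penalizes representations that behave differently depending on the augmentation, which is the causal-invariance intuition from the ReLIC framework.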