𝗗𝗮𝘆-𝟮𝟴𝟴 𝗖𝗼𝗺𝗽𝘂𝘁𝗲𝗿 𝗩𝗶𝘀𝗶𝗼𝗻 𝗟𝗲𝗮𝗿𝗻𝗶𝗻𝗴
Do Self-Supervised and Supervised Methods Learn Similar Visual Representations? by Apple
Follow me for similar posts: 🇮🇳 Ashish Patel

Interesting Facts:
🔸 This paper was published on arXiv in 2021.
-------------------------------------------------------------------
𝗔𝗺𝗮𝘇𝗶𝗻𝗴 𝗥𝗲𝘀𝗲𝗮𝗿𝗰𝗵 : https://lnkd.in/eWHPvfEt
-------------------------------------------------------------------
𝗜𝗠𝗣𝗢𝗥𝗧𝗔𝗡𝗖𝗘
🔸 Despite the success of a number of recent techniques for visual self-supervised deep learning, there has been limited investigation into the representations that are ultimately learned.
🔸 Using recent advances in comparing neural representations, the authors explore this direction by comparing a contrastive self-supervised algorithm (SimCLR) to supervised training for simple image data in a common architecture (see the sketch after this post).
🔸 They find that the two methods learn similar intermediate representations through dissimilar means, and that the representations diverge rapidly in the final few layers.
🔸 They investigate this divergence, finding that it is caused by these layers strongly fitting the distinct learning objectives. They also find that SimCLR's objective implicitly fits the supervised objective in intermediate layers, but that the reverse is not true.
🔸 The work particularly highlights the importance of the learned intermediate representations and raises important questions for auxiliary task design.

#computervision #artificialintelligence #innovation
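To make "comparing neural representations" concrete: a standard tool for this kind of layer-wise comparison is linear centered kernel alignment (CKA, Kornblith et al., 2019). Below is a minimal sketch of linear CKA, assuming you have already extracted activation matrices (same batch of images, one matrix per network/layer); the feature shapes and the random inputs are purely hypothetical stand-ins, not the paper's actual setup.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear Centered Kernel Alignment between two activation matrices.

    X: (n_examples, d1) activations from one model/layer
    Y: (n_examples, d2) activations from another model/layer
    Returns a similarity score in [0, 1]; 1 means identical up to
    an orthogonal transform and isotropic scaling.
    """
    # Center each feature dimension across examples
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)

    # Linear-kernel CKA: ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    cross = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    norm_x = np.linalg.norm(X.T @ X, ord="fro")
    norm_y = np.linalg.norm(Y.T @ Y, ord="fro")
    return cross / (norm_x * norm_y)

# Hypothetical usage: compare same-layer activations of a SimCLR-trained
# and a supervised network on one batch of images.
rng = np.random.default_rng(0)
acts_simclr = rng.normal(size=(512, 2048))      # placeholder features
acts_supervised = rng.normal(size=(512, 2048))  # placeholder features
print(linear_cka(acts_simclr, acts_supervised))
```

Computing this score for every pair of layers across the two networks is what lets one see the pattern the post describes: high similarity in intermediate layers, then rapid divergence in the final few.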
Thanks for posting!
Finding #4 is particularly mind-boggling, in an overall very interesting work. A LOT has been done and achieved in this field, but the pace has left many hoods waiting to be opened so we can check exactly what's underneath, and how it can be improved.