𝗗𝗮𝘆-𝟮𝟬𝟮 𝗖𝗼𝗺𝗽𝘂𝘁𝗲𝗿 𝗩𝗶𝘀𝗶𝗼𝗻 𝗟𝗲𝗮𝗿𝗻𝗶𝗻𝗴
𝗧𝗲𝗮𝗰𝗵𝗲𝗿𝘀 𝗗𝗼 𝗠𝗼𝗿𝗲 𝗧𝗵𝗮𝗻 𝗧𝗲𝗮𝗰𝗵: Compressing Image-to-Image Models, by Northeastern University and Snap Inc.
Follow me for similar posts: 🇮🇳 Ashish Patel

Interesting Facts:
🔸 This paper appeared at #CVPR2021 and has over 15 citations.
🔸 Its compressed generators match or even beat the original CycleGAN, Pix2Pix, etc. at a fraction of the computational cost.
-------------------------------------------------------------------
𝗔𝗺𝗮𝘇𝗶𝗻𝗴 𝗥𝗲𝘀𝗲𝗮𝗿𝗰𝗵: https://lnkd.in/eMrJtyn
Code: https://lnkd.in/e_SDDv6
-------------------------------------------------------------------
𝗜𝗠𝗣𝗢𝗥𝗧𝗔𝗡𝗖𝗘
🔸 Generative Adversarial Networks (GANs) have achieved huge success in generating high-fidelity images; however, they suffer from low efficiency due to tremendous computational cost and bulky memory usage. Recent efforts on compressing GANs show noticeable progress in obtaining smaller generators, but only by sacrificing image quality or running a time-consuming search process.
🔸 This work addresses these issues with a teacher network that does more than knowledge distillation: it also provides the search space in which efficient student architectures are found. First, the authors revisit the search space of generative models and introduce an inception-based residual block into the generators (sketch 1 below).
🔸 Second, to hit a target computation cost, they propose a one-step pruning algorithm that searches a student architecture directly from the teacher model, substantially reducing the search cost. It requires no ℓ1 sparsity regularization and none of its associated hyper-parameters, which simplifies the training procedure (sketch 2 below).
🔸 Finally, they distill knowledge by maximizing the feature similarity between teacher and student, measured by an index named Global Kernel Alignment (GKA) (sketch 3 below). The compressed networks achieve similar or even better image fidelity (FID, mIoU) than the original models at a much-reduced computational cost, e.g., MACs.
#computervision #artificialintelligence #deeplearning
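
The three sketches below are minimal PyTorch illustrations of the ideas above, not the authors' code. Sketch 1: an inception-style residual block, i.e., parallel conv branches with different kernel sizes whose outputs are concatenated, fused by a 1x1 conv, and added back to the input. The branch widths, kernel sizes, and norm/activation choices here are assumptions for illustration.

```python
import torch
import torch.nn as nn

class InceptionResidualBlock(nn.Module):
    """Residual block with parallel conv branches of different kernel
    sizes (inception-style), concatenated and fused by a 1x1 conv.
    Branch widths and kernel sizes are illustrative assumptions."""

    def __init__(self, channels: int, branch_channels: int = 16):
        super().__init__()

        def branch(k: int) -> nn.Sequential:
            # One conv branch with kernel size k; padding keeps H x W fixed.
            return nn.Sequential(
                nn.Conv2d(channels, branch_channels, k, padding=k // 2),
                nn.InstanceNorm2d(branch_channels),
                nn.ReLU(inplace=True),
            )

        self.branches = nn.ModuleList([branch(k) for k in (1, 3, 5)])
        # 1x1 conv fuses the concatenated branches back to `channels`.
        self.fuse = nn.Conv2d(3 * branch_channels, channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = torch.cat([b(x) for b in self.branches], dim=1)
        return x + self.fuse(y)  # residual connection

block = InceptionResidualBlock(64)
print(block(torch.randn(1, 64, 32, 32)).shape)  # torch.Size([1, 64, 32, 32])
```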
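
Sketch 2: the spirit of one-step pruning. Because channels are scored once on the already-trained teacher weights, no ℓ1 regularizer ever has to be trained in. The magnitude-based score below is a hypothetical stand-in, not the paper's exact search criterion.

```python
import torch

def one_step_prune(weight: torch.Tensor, keep_ratio: float) -> torch.Tensor:
    """Pick output channels of a conv weight (out, in, kH, kW) to keep,
    scoring each channel by its L2 norm in a single pass over the
    trained teacher weights (a stand-in for the paper's criterion)."""
    scores = weight.flatten(1).norm(dim=1)            # one score per out-channel
    n_keep = max(1, int(keep_ratio * weight.size(0)))
    return scores.topk(n_keep).indices.sort().values  # kept channel indices

w = torch.randn(64, 32, 3, 3)          # a trained teacher conv layer (toy)
kept = one_step_prune(w, keep_ratio=0.5)
student_w = w[kept]                    # 32 channels survive
print(student_w.shape)                 # torch.Size([32, 32, 3, 3])
```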
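
Sketch 3: a kernel-alignment distillation loss. Building Gram matrices over the batch makes teacher and student features comparable even when their channel counts differ after pruning; the centering and normalization details of the paper's GKA may differ from this generic version.

```python
import torch

def kernel_alignment_loss(f_t: torch.Tensor, f_s: torch.Tensor) -> torch.Tensor:
    """Distillation loss that maximizes kernel alignment between teacher
    and student feature maps of shape (N, C, H, W). Exact GKA details
    in the paper may differ; this is a generic alignment loss."""
    n = f_t.size(0)
    ft, fs = f_t.reshape(n, -1), f_s.reshape(n, -1)
    k_t = ft @ ft.T                      # (N, N) teacher Gram matrix
    k_s = fs @ fs.T                      # (N, N) student Gram matrix
    alignment = (k_t * k_s).sum() / (k_t.norm() * k_s.norm() + 1e-8)
    return 1.0 - alignment               # maximizing alignment = minimizing loss

teacher_feat = torch.randn(8, 256, 16, 16)  # toy teacher features
student_feat = torch.randn(8, 64, 16, 16)   # fewer channels after pruning
print(kernel_alignment_loss(teacher_feat, student_feat))
```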