Ashish Patel 🇮🇳’s Post

𝗗𝗮𝘆-𝟭𝟴𝟯 Computer Vision Learning

𝗦𝗼𝗳𝘁-𝗜𝗻𝘁𝗿𝗼𝗩𝗔𝗘: Analyzing and Improving Introspective Variational Autoencoders by 𝗜𝘀𝗿𝗮𝗲𝗹 𝗜𝗻𝘀𝘁𝗶𝘁𝘂𝘁𝗲 𝗼𝗳 𝗧𝗲𝗰𝗵𝗻𝗼𝗹𝗼𝗴𝘆

Follow me for similar posts: 🇮🇳 Ashish Patel

Interesting Facts:
🔸 This is a CVPR 2021 paper with over 13 citations.
🔸 It outperforms IntroVAE.
-------------------------------------------------------------------
𝗔𝗺𝗮𝘇𝗶𝗻𝗴 𝗥𝗲𝘀𝗲𝗮𝗿𝗰𝗵: https://lnkd.in/eRMYtbH
Code: https://lnkd.in/e77ejjr
-------------------------------------------------------------------
𝗜𝗠𝗣𝗢𝗥𝗧𝗔𝗡𝗖𝗘
🔸 The recently introduced introspective variational autoencoder (IntroVAE) produces outstanding image generations and allows amortized inference via an image encoder. The main idea in IntroVAE is to train a VAE adversarially, using the VAE encoder to discriminate between generated and real data samples.
🔸 However, the original IntroVAE loss function relied on a particular hinge-loss formulation that is very hard to stabilize in practice, and its theoretical convergence analysis ignored important terms in the loss.
🔸 This paper proposes Soft-IntroVAE, a modified IntroVAE that replaces the hinge-loss terms with a smooth exponential loss on generated samples. This change significantly improves training stability and also enables theoretical analysis of the complete algorithm.
🔸 Interestingly, the paper shows that Soft-IntroVAE converges to a distribution that minimizes a sum of the KL distance from the data distribution and an entropy term.

#computervision #artificialintelligence #data
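To make the hinge-vs-exponential distinction concrete, here is a minimal sketch (not the authors' implementation) contrasting the two adversarial terms applied to the ELBO of generated samples. The function names, the scalar ELBO inputs, and the `margin`/`scale` values are illustrative assumptions; the actual losses in the paper combine these terms with reconstruction and KL terms over full batches.

```python
import numpy as np

def hinge_term(elbo, margin=10.0):
    # IntroVAE-style adversarial term on generated samples (sketch):
    # a hard hinge [margin - ELBO]^+. Its gradient is exactly zero
    # once the ELBO exceeds the margin, which contributes to the
    # training instability the paper describes.
    return np.maximum(0.0, margin - elbo)

def soft_term(elbo, scale=1.0):
    # Soft-IntroVAE-style replacement (sketch): a smooth exponential
    # of the ELBO. It is differentiable everywhere, needs no margin
    # hyperparameter, and shrinks gracefully as the ELBO of generated
    # samples is pushed down.
    return np.exp(scale * elbo)

# Illustration: the hinge saturates to 0 past the margin,
# while the exponential stays smooth and strictly monotone.
print(hinge_term(15.0))   # past the margin -> flat region
print(hinge_term(5.0))    # inside the margin -> linear penalty
print(soft_term(-5.0), soft_term(-1.0))
```

The smoothness is the point: the exponential gives the encoder a useful gradient signal at every ELBO value, whereas the hinge provides none beyond the margin.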


