Ashish Patel 🇮🇳’s Post

𝗗𝗮𝘆-𝟰𝟰𝟮 𝗖𝗼𝗺𝗽𝘂𝘁𝗲𝗿 𝗩𝗶𝘀𝗶𝗼𝗻 𝗟𝗲𝗮𝗿𝗻𝗶𝗻𝗴
One-Shot Adaptation of GAN in Just One CLIP, by KAIST
Follow me for similar posts: Ashish Patel
-------------------------------------------------------------------
𝗜𝗻𝘁𝗲𝗿𝗲𝘀𝘁𝗶𝗻𝗴 𝗙𝗮𝗰𝘁𝘀
🔸 This paper was published on arXiv in 2022.
-------------------------------------------------------------------
𝗜𝗠𝗣𝗢𝗥𝗧𝗔𝗡𝗖𝗘
➡️ There have been many recent research efforts to fine-tune a pre-trained generator with a few target images so it can generate images of a novel domain.
➡️ Unfortunately, these methods often suffer from overfitting or underfitting when fine-tuned with a single target image.
➡️ To address this, the authors present a novel single-shot GAN adaptation method built on unified CLIP-space manipulations. Their model employs a two-step training strategy:
➡️ reference-image search in the source generator via CLIP-guided latent optimization, followed by generator fine-tuning with a novel loss function that imposes CLIP-space consistency between the source and adapted generators.
➡️ To further push the adapted model to produce samples that are spatially consistent with the source generator, they also propose a contrastive regularization on patchwise relationships in CLIP space.
➡️ Experimental results show that the model generates diverse outputs with the target texture and outperforms baseline models both qualitatively and quantitatively.
➡️ Furthermore, the CLIP-space manipulation strategy allows more effective attribute editing.
#computervision #artificialintelligence #technology
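To make the two loss ideas above concrete, here is a minimal NumPy sketch of (1) a CLIP-space consistency term that aligns the direction between a reference embedding and a sample embedding across the source and adapted generators, and (2) a patchwise contrastive (InfoNCE-style) regularizer over patch embeddings. This is an illustrative sketch under my own assumptions, not the paper's implementation: the function names, the exact loss forms, and the use of pre-computed embedding vectors in place of a real CLIP encoder are all hypothetical.

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)

def clip_consistency_loss(src_ref, src_sample, tgt_ref, tgt_sample):
    """Hypothetical CLIP-space consistency term.

    The direction from reference to sample in the source generator's
    CLIP embeddings should match the corresponding direction for the
    adapted (target) generator. Loss is 0 when directions align.
    """
    d_src = src_sample - src_ref   # direction in source CLIP space
    d_tgt = tgt_sample - tgt_ref   # direction in adapted CLIP space
    return 1.0 - cosine(d_src, d_tgt)

def patch_contrastive_loss(src_patches, tgt_patches, tau=0.07):
    """Hypothetical patchwise contrastive regularizer (InfoNCE-style).

    Rows are per-patch CLIP embeddings; patch i of the source image is
    the positive for patch i of the adapted image, all others negatives.
    """
    src = src_patches / np.linalg.norm(src_patches, axis=1, keepdims=True)
    tgt = tgt_patches / np.linalg.norm(tgt_patches, axis=1, keepdims=True)
    logits = src @ tgt.T / tau                       # patch-to-patch similarities
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))              # -log p(positive patch)
```

For example, if the adapted generator shifts every embedding by a constant vector, the reference-to-sample direction is unchanged and `clip_consistency_loss` is near zero, while `patch_contrastive_loss` penalizes the adapted image whenever its patches correspond to the wrong spatial locations in the source image.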

