Ashish Patel 🇮🇳’s Post

𝗗𝗮𝘆-𝟯𝟭𝟮 𝗖𝗼𝗺𝗽𝘂𝘁𝗲𝗿 𝗩𝗶𝘀𝗶𝗼𝗻 𝗟𝗲𝗮𝗿𝗻𝗶𝗻𝗴

Researchers from the Massachusetts Institute of Technology, NVIDIA, and the University of Toronto have published 𝗘𝗱𝗶𝘁𝗚𝗔𝗡 for high-precision semantic image editing.

Follow me for similar posts: 🇮🇳 Ashish Patel
-------------------------------------------------------------------
𝗜𝗻𝘁𝗲𝗿𝗲𝘀𝘁𝗶𝗻𝗴 𝗙𝗮𝗰𝘁𝘀 :
🔸 Paper: 𝗘𝗱𝗶𝘁𝗚𝗔𝗡: 𝗛𝗶𝗴𝗵-𝗣𝗿𝗲𝗰𝗶𝘀𝗶𝗼𝗻 𝗦𝗲𝗺𝗮𝗻𝘁𝗶𝗰 𝗜𝗺𝗮𝗴𝗲 𝗘𝗱𝗶𝘁𝗶𝗻𝗴
🔸 The paper was published on arXiv in 2021.
🔸 Key features:
(1) EditGAN builds on a GAN framework that jointly models images and their semantic segmentations.
(2) Users can modify segmentation masks, based on which the method performs optimization in the GAN's latent space to realize the edit.
(3) Users can also edit simply by applying previously learnt editing vectors, manipulating images at interactive rates.
-------------------------------------------------------------------
𝗜𝗠𝗣𝗢𝗥𝗧𝗔𝗡𝗖𝗘
🔸 Generative adversarial networks (GANs) have recently found applications in image editing. However, most GAN-based image editing methods require large-scale datasets with semantic segmentation annotations for training, only provide high-level control, or merely interpolate between different images.
🔸 EditGAN is a novel method for high-quality, high-precision semantic image editing, allowing users to edit images by modifying their highly detailed part segmentation masks, e.g., drawing a new mask for the headlight of a car.
🔸 EditGAN builds on a GAN framework that jointly models images and their semantic segmentations, requiring only a handful of labelled examples, which makes it a scalable editing tool.
🔸 Specifically, the method embeds an image into the GAN latent space and performs conditional latent code optimization according to the segmentation edit, which effectively also modifies the image. To amortize optimization, it finds editing vectors in latent space that realize the edits.
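The conditional latent optimization step above can be sketched in a toy way: below is a minimal NumPy illustration where hypothetical linear maps stand in for EditGAN's generator and its jointly-trained segmentation branch (the real method uses a StyleGAN-style network; the weights, loss weighting, and learning rate here are all made up for illustration). The idea it shows: move the latent code so the predicted segmentation matches the user's edited mask, while a second term keeps the image close to the original.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for EditGAN's components: a "generator" mapping a
# latent code w to image features, and a shared "segmentation branch"
# mapping the same w to part-segmentation logits.
W_img = rng.normal(size=(16, 8)) * 0.25
W_seg = rng.normal(size=(16, 8)) * 0.25

def generate(w):
    return W_img @ w

def segment(w):
    return W_seg @ w

# Start from the latent code of an embedded image; the user then edits
# part of its segmentation mask.
w = rng.normal(size=8)
target_seg = segment(w).copy()
target_seg[:4] += 1.0            # user's edit to part of the mask
src_img = generate(w)

# Conditional latent optimization: gradient descent on
#   ||seg(w') - edited mask||^2 + 0.1 * ||img(w') - original image||^2
w_edit = w.copy()
lr = 0.05
for _ in range(500):
    grad_seg = 2 * W_seg.T @ (segment(w_edit) - target_seg)
    grad_img = 2 * W_img.T @ (generate(w_edit) - src_img)
    w_edit -= lr * (grad_seg + 0.1 * grad_img)

# The resulting latent-space offset is the "editing vector" the paper
# reuses on other images to amortize the optimization.
edit_vector = w_edit - w
```

In the paper this optimization happens in a real GAN's latent space with perceptual and pixel losses; the sketch only conveys the two-term objective and the editing-vector idea.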
🔸 The framework allows us to learn an arbitrary number of editing vectors, which can then be directly applied to other images at interactive rates. We experimentally show that EditGAN can manipulate images with an unprecedented level of detail and freedom while preserving full image quality.
🔸 We can also easily combine multiple edits and perform plausible edits beyond the EditGAN training data. We demonstrate EditGAN on a wide variety of image types and quantitatively outperform several previous editing methods on standard editing benchmark tasks.

#computervision #artificialintelligence #technology
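Why reusing editing vectors is interactive-rate: once learned, each edit is just a (scaled) vector addition in latent space, with no per-image optimization. A small sketch, assuming purely hypothetical pre-learned vectors (the names and scales are invented for illustration, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical editing vectors learned beforehand (e.g. "raise headlight",
# "enlarge wheel"); in EditGAN these live in the GAN's latent space.
headlight_vec = rng.normal(size=8) * 0.1
wheel_vec = rng.normal(size=8) * 0.1

def apply_edits(w, vectors, scales):
    # Each edit is one scaled vector addition, so combining several edits
    # on a new image costs only a few vector ops -- interactive rates.
    w_out = w.copy()
    for v, s in zip(vectors, scales):
        w_out = w_out + s * v
    return w_out

w_new = rng.normal(size=8)  # latent code of a different embedded image
w_edited = apply_edits(w_new, [headlight_vec, wheel_vec], [1.0, 0.5])
```

Scaling a vector's coefficient controls edit strength, and summing several vectors combines edits, matching the post's point that multiple edits compose easily.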


