Inspiration
Every Christmas and birthday, we struggle to think of a present for our loved ones. For us, clothing is an easy solution. But how do you visualise how they would look in it without ruining the surprise?
What it does
The web app takes two input images: one of you (or your loved one) and one of the new piece of clothing. These are sent to a Pix2Pix model, which generates a new image showing how you would hypothetically look in your new clothes.
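At a high level, the request path is: normalise both images, stack them as one conditioning input, and run a single generator forward pass. A minimal sketch of that flow (function names and the dummy generator are placeholders for illustration, not our actual code):

```python
import numpy as np

def preprocess(image):
    """Normalise pixel values to [-1, 1], the range Pix2Pix
    generators conventionally work in."""
    arr = np.asarray(image, dtype=np.float32)
    return arr / 127.5 - 1.0

def try_on(person, clothing, generator):
    """Concatenate the person and clothing images along the channel
    axis and pass them through the generator in one forward call."""
    x = np.concatenate([preprocess(person), preprocess(clothing)], axis=-1)
    y = generator(x)                              # (H, W, 3) in [-1, 1]
    return ((y + 1.0) * 127.5).astype(np.uint8)   # back to displayable pixels

# Dummy "generator" for illustration: echoes the person channels.
identity_gen = lambda x: x[..., :3]

person = np.zeros((256, 256, 3), dtype=np.uint8)
clothing = np.full((256, 256, 3), 255, dtype=np.uint8)
out = try_on(person, clothing, identity_gen)
print(out.shape)  # (256, 256, 3)
```

In the real app the `generator` callable is the trained Attention U-Net, and the two uploads are resized to a common resolution before this step.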
How we built it
We modified the original Pix2Pix architecture by swapping the generator for an Attention U-Net and training the model from scratch. VITON-HD is a well-known virtual try-on dataset, but it does not fit our use case, so we created our own much smaller dataset inspired by it. Since the new dataset is small, we used NVIDIA's Adaptive Discriminator Augmentation (ADA) to avoid mode collapse.
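The idea behind ADA is a feedback loop: estimate how overconfident the discriminator is on real images, and raise or lower the probability of augmenting its inputs to keep that estimate near a target. A toy sketch of one such update (the constants, names, and the simple sign heuristic are illustrative, not ADA's exact hyperparameters):

```python
def sign(x):
    """Sign of x as an int: -1, 0, or 1."""
    return (x > 0) - (x < 0)

def update_ada_p(p, d_real_outputs, target=0.6, step=0.01):
    """One ADA-style update. r_t estimates how often the discriminator
    is confident on real images; nudge the augmentation probability p
    up when r_t exceeds the target, down otherwise."""
    r_t = sum(sign(d) for d in d_real_outputs) / len(d_real_outputs)
    p += step if r_t > target else -step
    return min(max(p, 0.0), 1.0)  # clamp to a valid probability

# Discriminator far too confident on reals -> p is nudged upward.
p = update_ada_p(0.1, [0.9, 0.8, 0.95, 0.7])
```

In training, this update runs every few minibatches, and `p` is then used as the per-image probability of applying each augmentation to the discriminator's inputs.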
We planned to fine-tune our generator to produce images outside the VITON-HD domain, for more realistic use cases and for a very cool live demo. However, we ran out of time, so please use our not-yet-fine-tuned model with lower expectations :)
Challenges we ran into
Integration with the web app. In our sleep-deprived daze we forgot to do integration testing with the web app and ran into issues at the end. We also weren't very familiar with the technologies the other members were using, leading to gaps in knowledge. Turns out project management is hard!
Accomplishments that we're proud of
Training the model. Instead of using a pre-trained model, everything was trained from scratch, on-site, during the hackathon.
What we learned
Training GANs is very difficult and time-consuming!
What's next for Virtual Clothes TryOn
A proper front-end rework. Our focus was on the model, and no one on the team was a UI expert, so the interface leaves much to be desired.