Inspiration
Inspired by research done by the DAIR team as part of QMIND, we wanted to apply what we had learned about generative adversarial networks (GANs) to a real-world scenario. During the pandemic, masks play a huge role in keeping everyone safe, but they also obstruct our connections with others. During our time in residence, we had a difficult time befriending floormates amid constantly changing health restrictions. Although residence dons provided some icebreaker activities, it was tricky to get to know people when we could not put much of a face to a name. It would be much easier to get to know people face to face, and that made us wonder: what would people look like under their face masks?
What it does
The user uploads an image of someone wearing a face mask; our model removes the mask and displays the reconstructed face on screen.
How we built it
To create the model, we performed paired image-to-image translation with the Pix2Pix GAN, training on the “Correctly Masked Face Dataset” and “Flickr-Faces-HQ” datasets. We then used Flask to turn our program into a live website for all to use.
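Pix2Pix trains a generator against a PatchGAN discriminator that sees the input and output images concatenated together, combining an adversarial loss with an L1 loss toward the ground-truth image. The following is a minimal sketch of one training step in PyTorch, not our exact code: `G`, `D`, the optimizers, and the batch variables are assumed to be defined elsewhere.

```python
import torch
import torch.nn as nn

adv_loss = nn.BCEWithLogitsLoss()  # adversarial loss on PatchGAN logits
l1_loss = nn.L1Loss()              # pixel loss pulling output toward the target
LAMBDA = 100                       # L1 weight, as in the Pix2Pix paper

def train_step(G, D, opt_G, opt_D, masked, unmasked):
    """One Pix2Pix step. masked/unmasked: paired batches of shape (N, 3, H, W)."""
    fake = G(masked)

    # Discriminator: score real (input, target) pairs vs. generated pairs.
    real_logits = D(torch.cat([masked, unmasked], dim=1))
    fake_logits = D(torch.cat([masked, fake.detach()], dim=1))
    d_loss = 0.5 * (adv_loss(real_logits, torch.ones_like(real_logits))
                    + adv_loss(fake_logits, torch.zeros_like(fake_logits)))
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()

    # Generator: fool the discriminator while staying close to the real face.
    fake_logits = D(torch.cat([masked, fake], dim=1))
    g_loss = (adv_loss(fake_logits, torch.ones_like(fake_logits))
              + LAMBDA * l1_loss(fake, unmasked))
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()
    return d_loss.item(), g_loss.item()
```

For serving, a small Flask app can accept an uploaded image, run it through the trained generator, and return the result. This is a sketch under assumptions: the route name, the `image` form field, the `generator.pt` checkpoint, and a generator that outputs values in [0, 1] are all illustrative.

```python
from io import BytesIO

import torch
from flask import Flask, request, send_file
from PIL import Image
from torchvision import transforms

app = Flask(__name__)
generator = torch.load("generator.pt", map_location="cpu")  # hypothetical checkpoint
generator.eval()

to_tensor = transforms.Compose([transforms.Resize((256, 256)),
                                transforms.ToTensor()])
to_image = transforms.ToPILImage()

@app.route("/unmask", methods=["POST"])
def unmask():
    # Expect a single uploaded file under the form field "image".
    img = Image.open(request.files["image"].stream).convert("RGB")
    with torch.no_grad():
        out = generator(to_tensor(img).unsqueeze(0)).squeeze(0).clamp(0, 1)
    buf = BytesIO()
    to_image(out).save(buf, format="PNG")
    buf.seek(0)
    return send_file(buf, mimetype="image/png")
```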
How QMask promotes interconnectivity
QMask lets its users tie a sense of humanity back to masked figures. With barriers restricting physical interaction, it has been difficult to express emotion and form genuine connections with others. Stories go untold, and freedom of expression is limited by generic surgical masks. Our program helps restore that interconnectivity: by virtually removing face coverings, users can share their personalities with those around them while still following public health guidelines.
Challenges we ran into
Learning a technology this new was difficult because of the limited resources available online. Training our model on Google Colab demanded heavy GPU usage and disk space, forcing time-consuming workarounds. This hackathon was full of firsts for our team: it was our first time building a project with Flask, PyTorch, and GANs.
Accomplishments that we're proud of
Stuart: I am proud of how much I learned during this event and of successfully implementing my first PyTorch application!
Nathan: I am proud of building a functional website that communicates with our backend.
What we learned
We learned how to make our own PyTorch dataset (a sketch of the idea follows below).
We learned the basics of the tools used to build this project (PyTorch, Flask, GANs).
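As an illustration of the kind of dataset class we built, here is a minimal sketch of a custom PyTorch `Dataset` that pairs masked faces with their unmasked counterparts by file name; the directory layout and file extension are assumptions, not our exact structure.

```python
from pathlib import Path

from PIL import Image
from torch.utils.data import Dataset
from torchvision import transforms

class PairedFaceDataset(Dataset):
    """Pairs masked images with unmasked counterparts that share a file name,
    e.g. masked/00001.png <-> unmasked/00001.png (layout is illustrative)."""

    def __init__(self, masked_dir, unmasked_dir, size=256):
        self.masked_paths = sorted(Path(masked_dir).glob("*.png"))
        self.unmasked_dir = Path(unmasked_dir)
        self.tf = transforms.Compose([transforms.Resize((size, size)),
                                      transforms.ToTensor()])

    def __len__(self):
        return len(self.masked_paths)

    def __getitem__(self, i):
        masked_path = self.masked_paths[i]
        masked = self.tf(Image.open(masked_path).convert("RGB"))
        unmasked = self.tf(Image.open(self.unmasked_dir / masked_path.name)
                           .convert("RGB"))
        return masked, unmasked
```

Wrapped in a `DataLoader`, this yields the (masked, unmasked) batches that a training step like the sketch above consumes.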
What's next for QMask
Pix2Pix GANs are a relatively new technology, and there are visible inaccuracies in our results; as these networks mature, we hope to improve our model's accuracy. We also did not have time to fully train the model and expect it to improve with more training. Because we were limited by the hardware available to us, we plan to learn how to use Google Cloud to train the model more effectively. There are other architectures for paired image-to-image translation, such as Pix2PixHD, and we are also interested in designing our own. Finally, expanding our supported file types to video would enable an even wider range of applications.
