Inspiration
We can already change the art style of images, and interpolate between existing styles, but can we interpolate styles across space as well?
What it does
Takes an input image or video and restyles it into two unique styles, interpolated spatially. It can operate on arbitrary input images or videos and arbitrary style images.
How I built it
I used existing technology for style transfer for images and temporal consistency for videos. I extended style transfer to allow interpolation between styles over space. Then, I combined the new style transfer algorithm with the temporal consistency algorithm to create videos.
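The core idea of interpolating styles over space can be sketched with a simplified stand-in: blending two pre-stylized renderings of the same frame with a per-pixel weight mask. The real system interpolates inside the style transfer algorithm itself; the function and array names below are hypothetical, chosen only to illustrate the spatial blend.

```python
import numpy as np

def spatial_style_blend(styled_a, styled_b):
    """Blend two stylized renderings of the same frame with a
    left-to-right linear mask, so style A dominates at the left
    edge and style B at the right edge (illustrative sketch only)."""
    h, w, _ = styled_a.shape
    # Per-column weight ramps from 0.0 (pure style A) to 1.0 (pure style B).
    weights = np.linspace(0.0, 1.0, w).reshape(1, w, 1)
    return (1.0 - weights) * styled_a + weights * styled_b

# Hypothetical usage: stand-in float arrays of shape (H, W, 3).
frame_a = np.zeros((4, 6, 3))  # stands in for the style-A rendering
frame_b = np.ones((4, 6, 3))   # stands in for the style-B rendering
blended = spatial_style_blend(frame_a, frame_b)
```

Blending finished outputs like this is the crudest version of the idea; interpolating the style representation itself, as the project does, avoids ghosting artifacts where the two renderings disagree.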
Challenges I ran into
One challenge was reverse-engineering the style transfer system. Luckily, it was written in an easy-to-extend way, so I was able to add my spatial style interpolation by the end of Day 1. On Day 2, my main challenge was combining the style transfer system with the video temporal consistency system. This took some data wrangling to convert between the various formats each system needed.
Accomplishments that I'm proud of
I'm proud that my conjecture actually worked! It is pretty rare in deep learning that something you think will work actually works...
What I learned
I learned how to transform image and video formats for use with Numpy/PyTorch.
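A typical instance of that format wrangling is converting between the (H, W, C) uint8 layout image files use and the (C, H, W) float layout PyTorch models expect. This is a generic sketch of that conversion, not the project's actual preprocessing code:

```python
import numpy as np

def hwc_uint8_to_chw_float(img):
    """Convert an (H, W, C) uint8 image (the usual image-file layout)
    to a (C, H, W) float32 array in [0, 1], the layout PyTorch expects."""
    return np.transpose(img.astype(np.float32) / 255.0, (2, 0, 1))

def chw_float_to_hwc_uint8(arr):
    """Invert the conversion, e.g. for writing a frame back to disk."""
    hwc = np.transpose(arr, (1, 2, 0))
    return (np.clip(hwc, 0.0, 1.0) * 255.0).round().astype(np.uint8)

# Example: a tiny all-white 2x3 RGB frame.
frame = np.full((2, 3, 3), 255, dtype=np.uint8)
tensor_ready = hwc_uint8_to_chw_float(frame)  # shape (3, 2, 3), values 1.0
```

Wrapping the result with `torch.from_numpy` (and reversing with `.numpy()`) then moves frames between the video pipeline and the network.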
What's next for Style Transition
The next step would be to combine Style Transition with the HackGT Photo Style booth to enable smooth style transitions with object boundary awareness.