Inspiration

Over the last couple of months, there has been a lot of focus on ChatGPT and Lensa AI. We were really inspired by Lensa AI and wondered how to make it available to the masses.

We are from India, where per capita income is ~$1,800. While that is one side of the story, the other is that India is home to 650mn smartphone users, most of whom recharge their mobile data daily. With Lensa AI becoming a rage, we realized that not everyone could afford to pay $10 to generate their images. More importantly, credit card penetration in India is just under 3%, while the world average is well over 30%.

So we thought: why not bring the power of Stable Diffusion to AR and show the power of ML to the masses?

With our lens, we attempt to draw from the power of ML (and hence AI) and transfer it to AR, so that Snap users can try it for free. We want to give users an experience where they can generate a Lensa AI-style image whenever they'd like, without worrying about adding a "credit card" to their smartphone.

What it does

The lens is made up of 3 different Face ML models, 2 different art styles (a painting look and a cartoon look), 2 different background looks (a style-transfer-based look and a cartoon-based look), and finally 4 different overlays.
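Assuming the components combine freely, those counts alone yield 3 × 2 × 2 × 4 = 48 distinct looks. A tiny illustrative sketch (not the actual lens code; the component names are made up, only the counts come from the description above):

```javascript
// Illustrative only: enumerate every combination of the lens components.
const faceModels = ["faceML_1", "faceML_2", "faceML_3"];
const artStyles = ["painting", "cartoon"];
const backgrounds = ["styleTransfer", "cartoonBg"];
const overlays = ["overlay_1", "overlay_2", "overlay_3", "overlay_4"];

const combos = [];
for (const f of faceModels)
  for (const a of artStyles)
    for (const b of backgrounds)
      for (const o of overlays)
        combos.push([f, a, b, o].join("+"));

console.log(combos.length); // 48 distinct looks
```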

How we built it

Since our aim was to create something similar to Stable Diffusion, but in real time, we decided to use Lens Studio's built-in ML models to create multiple looks. To keep the experience optimized, we built a system that is simple but can produce distinct, subtle variations, delivering different looks for users.

Challenges we ran into

We were not able to add more ML models because of the size limitation. Cloud Assets are available, but they didn't support the ML models we thought they would; we realized Cloud Assets support only 2D textures and 3D assets.

We would have loved it if Lensa AI offered public API access so we could have built on top of it and brought the experience as close to the real thing as possible.

Accomplishments that we're proud of

We were able to create multiple combinations of patterns, each producing a new look. We also made sure the same pattern didn't repeat until all possible patterns had been shown: we used local storage to save the patterns already created, which ensured there was no duplication.
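The no-repeat logic above can be sketched as follows. This is a minimal illustration under stated assumptions, not the actual lens code: a plain object stands in for Lens Studio's local storage, and the function names are hypothetical.

```javascript
// Sketch of no-repeat pattern selection backed by persisted state.
// "storage" is a stand-in for Lens Studio's local storage.
function makePicker(totalPatterns, storage) {
  return function pick() {
    let used = storage.used || [];
    // Once every pattern has been shown, reset and start a new cycle.
    if (used.length >= totalPatterns) used = [];
    // Draw random indices until we find one unused in this cycle.
    let index;
    do {
      index = Math.floor(Math.random() * totalPatterns);
    } while (used.indexOf(index) !== -1);
    used.push(index);
    storage.used = used; // persist so the next run continues the cycle
    return index;
  };
}

// Usage: with 5 patterns, 5 consecutive picks never repeat.
const storage = {};
const pick = makePicker(5, storage);
const seen = [0, 1, 2, 3, 4].map(pick);
```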

What we learned

We now really understand how to work with remote assets and which features they support.

What's next for Artify

One day, every Snapchat user will be able to use Lensa AI-quality Stable Diffusion to generate truly live images of themselves or their friends.

Built With

  • face-ml
  • javascript
  • lensstudio
  • local-storage
  • ml
  • shader-graph
  • shaders
  • snap-ml
  • styletransfer