About the Project

VIE - Poison the data before it gets to them.

https://vie.beauty/

Over 4,000 data brokers scrape photos from social platforms in real time, building permanent biometric databases without consent. We built VIE for the activist who doesn't want their face in a government database and the person leaving a dangerous situation who needs to exist online without being found.

What it does

VIE is a free web tool that makes facial recognition systems unable to identify anyone in your photos — while the image looks completely unchanged to the human eye.

Upload a photo. Our model detects your face in the image, encodes it into a latent representation with a VQ-GAN, applies an imperceptible adversarial perturbation in that latent space, and reconstructs the image. The result looks identical to your family and friends; to a recognition system, it's an entirely different person. You get your protected image plus a privacy score for every face, validated against multiple recognition models.
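The flow above can be sketched with stand-in encode/decode functions. The real system uses a trained VQ-GAN and an adversarial gradient; the function names, the random perturbation direction, and the ε budget below are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(face):
    # stand-in for the VQ-GAN encoder: map pixels to a latent vector
    return face.reshape(-1).astype(np.float64)

def decode(z, shape):
    # stand-in for the VQ-GAN decoder
    return z.reshape(shape)

def protect(face, eps=4 / 255):
    """Perturb the face in latent space, then clamp the pixel-space
    change to +/- eps so the edit stays imperceptible."""
    z = encode(face)
    direction = rng.standard_normal(z.shape)   # in the real system this comes
    direction /= np.linalg.norm(direction)     # from an adversarial objective
    recon = decode(z + 0.1 * direction, face.shape)
    delta = np.clip(recon - face, -eps, eps)   # enforce the pixel budget
    return np.clip(face + delta, 0.0, 1.0)

face = rng.random((64, 64, 3))
protected = protect(face)
print(float(np.abs(protected - face).max()))  # stays within the eps budget
```

The key property the sketch preserves is the per-pixel budget: however far the latent moves, the visible change is clamped to a few intensity levels.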

How we built it

Frontend: React + Tailwind CSS for uploading and per-face privacy scores.

Backend: FastAPI serving our ML pipeline: face detection, a VQ-GAN encoder that maps each face into a compact latent space, a secondary latent-movement model that applies a targeted adversarial perturbation, and the VQ-GAN decoder for photorealistic reconstruction. We validate against DeepFace, CompreFace, and ArcFace [3] to ensure protection generalizes across recognition architectures.
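Cross-model validation can be summarized as a worst-case score: an attacker gets to pick whichever recognition model still matches best, so the reported score is driven by the highest remaining similarity. The embeddings below are hard-coded stand-ins; in the real pipeline they would come from models such as DeepFace, CompreFace, and ArcFace.

```python
import numpy as np

def cosine(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def privacy_score(pairs):
    """pairs: (clean_embedding, protected_embedding) per recognition model.
    1.0 = protected face is orthogonal to the original under every model;
    0.0 = at least one model still matches it perfectly."""
    return 1.0 - max(cosine(c, p) for c, p in pairs)

# stand-in embeddings from two "models"
pairs = [
    (np.array([1.0, 0.0]), np.array([0.0, 1.0])),  # fully fooled (sim 0.0)
    (np.array([1.0, 0.0]), np.array([0.6, 0.8])),  # still ~0.6 similar
]
print(privacy_score(pairs))  # worst case dominates: 1 - 0.6 = 0.4
```

Taking the max (rather than the mean) over models keeps the score honest: one model that still recognizes the face caps the score, no matter how badly the others fail.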

Deployment: GitHub Pages (frontend) + homelab (backend/model serving).

Challenges we ran into

The core tension: how little can you change an image so a human notices nothing but an AI is completely fooled? Too subtle and recognition still matches; too aggressive and the image visibly degrades. Perturbations also had to generalize across recognition architectures simultaneously — defeating one model but not another gives a false sense of security.
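The "defeat all models at once" constraint can be illustrated with a toy PGD-style loop: optimize a single bounded perturbation against the *sum* of similarities under several recognition models simultaneously. The linear "models", the ε budget, and the step size below are assumptions for illustration; real recognition models are deep networks attacked via backpropagated gradients rather than the numerical gradient used here.

```python
import numpy as np

rng = np.random.default_rng(1)
# three stand-in linear "recognition models" mapping a 16-dim "face" to embeddings
models = [rng.standard_normal((8, 16)) for _ in range(3)]

def sims(x, x_adv):
    out = []
    for W in models:
        a, b = W @ x, W @ x_adv
        out.append(float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b))))
    return out

x = rng.standard_normal(16)
delta = np.zeros(16)
eps, step = 0.5, 0.05                   # perturbation budget and step size

for _ in range(100):
    # numerical gradient of the summed similarity across ALL models:
    # attacking one model at a time would leave the others still matching
    base = sum(sims(x, x + delta))
    grad = np.zeros_like(delta)
    for i in range(delta.size):
        d = delta.copy()
        d[i] += 1e-5
        grad[i] = (sum(sims(x, x + d)) - base) / 1e-5
    delta = np.clip(delta - step * np.sign(grad), -eps, eps)  # PGD projection

print([round(s, 3) for s in sims(x, x + delta)])
```

The clip on every step is the "too aggressive" guardrail: however hard the optimizer pushes, the perturbation never leaves the ε box.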

Training a VQ-GAN in 36 hours also proved a struggle. Settling on the idea and downloading the data took the first night, and unzipping the 100+ GB dataset took over 5 hours in total (we started developing the models long before it finished). Between the codebook constantly collapsing and a latent topology not coherent enough to perturb into meaningful regions, the VQ-GAN alone took nearly 18 hours of development on the H100s graciously provided by RCAC.

Accomplishments that we're proud of

Existing tools like Fawkes [1] and LowKey [2] only protect a single face against one model or a small ensemble, or attempt to poison training data. VIE protects photos against every model, even unseen ones, because it isn't designed to exploit flaws in weaker open-source models but to warp the very structure of your face. We also provide a transparent privacy score rather than asking users to trust blindly. And the full product is a free website: no installs, no technical knowledge required.

We bought a domain specifically for this project and deployed to it. To maximize the speed of the process, we ran the pipeline on one of our homelab's GPUs, for average speedups of more than 8x.

To test our poisoned output, the pipeline connects to two open-source face-similarity models: DeepFace and InsightFace. Each behaves differently depending on its input, in particular how much of the face (and surrounding context) it receives, so we use a different crop padding for each model to get optimal test results.
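Per-model padding can be handled with a small crop helper. The ratio values below are illustrative placeholders, not the tuned values used in the project.

```python
import numpy as np

# assumed padding ratios per verifier model (illustrative, not the tuned values)
PADDING = {"deepface": 0.10, "insightface": 0.25}

def crop_with_padding(img, box, ratio):
    """Expand a detected face box by `ratio` of its size on each side,
    clipped to the image bounds, before handing it to a verifier."""
    x1, y1, x2, y2 = box
    px = int((x2 - x1) * ratio)
    py = int((y2 - y1) * ratio)
    h, w = img.shape[:2]
    return img[max(0, y1 - py):min(h, y2 + py),
               max(0, x1 - px):min(w, x2 + px)]

img = np.zeros((100, 100, 3))
box = (30, 30, 70, 70)                  # a 40x40 detected face
crops = {m: crop_with_padding(img, box, r) for m, r in PADDING.items()}
print({m: c.shape for m, c in crops.items()})
```

Keeping the ratios in one dict makes it easy to feed each verifier the amount of context it was trained to expect without duplicating crop logic.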

We built a storytelling UI with animations for the landing and tool pages after hours of planning, surveying, and research. While at most hackathons teams focus solely on building features, we also focused on making a fully finished product that we can proudly share with the public.

What we learned

We learned a lot about deep learning through this project. While some of us had experience with trajectory diffusion, image classifiers, and vision transformers, this was a significant step up from anything any of us had actually built before, and on one of the shortest timelines as well.

The next main struggle was a website design that was intuitive yet good-looking. While web design wasn't the most technically challenging part, it was definitely the hardest to polish to the point where we were satisfied.

Finally, hosting all of the services together was a challenge. Between a domain pointing to GitHub Pages, which in turn calls out to FastAPI backends, it took a while to find a setup that flowed as one piece.

What's next for VIE

A public API so developers can integrate VIE into their own apps, a browser extension that protects images automatically before they leave your device, and batch processing for entire photo libraries.

Further fine-tuning of the model on new and more varied images would also improve reconstruction fidelity.

[1] Shan et al., "Fawkes: Protecting Privacy against Unauthorized Deep Learning Models," USENIX Security, 2020.

[2] Cherepanova et al., "LowKey: Leveraging Adversarial Attacks to Protect Social Media Users from Facial Recognition," ICLR, 2021.

[3] Deng et al., "ArcFace: Additive Angular Margin Loss for Deep Face Recognition," CVPR, 2019.
