Elevator Pitch
FaceShield is a full-stack web application that measures how vulnerable a face image is to being learned, replicated, or modeled by artificial intelligence systems, helping people understand and actively reduce that risk through protective cloaking.
Inspiration
FaceShield was inspired by Fawkes, a research project that demonstrated that adversarial perturbations can protect facial identity from recognition systems. These perturbations are invisible to the human eye but cause machine learning systems to learn incorrect identity representations. However, the tool remains difficult for non-technical users to understand or use. FaceShield translates this research-level privacy defense into an accessible, explainable web application.
What it Does
FaceShield evaluates facial images through four stages:
Face Analysis When a user uploads an image, the system detects a face and extracts facial landmarks using MediaPipe Face Mesh. These landmarks represent geometric anchors such as eye corners, nose position, jawline shape, and facial symmetry. AI systems rely heavily on stable geometry and detailed pixel information when learning identities. Images that are front-facing, sharp, and high-resolution typically provide stronger training signals than images that are small, blurred, or partially obscured. The backend then converts uploads into OpenCV images, extracts normalized landmark coordinates, and computes a facial bounding region used for downstream analysis. This stage essentially converts a visual image into measurable structural data.
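Assuming the normalized (x, y) landmark coordinates have already been produced by MediaPipe Face Mesh, the bounding region and face-size ratio described above might be computed along these lines (a sketch; the function name and the example points are hypothetical, not taken from the project):

```python
import numpy as np

def face_bbox_and_ratio(landmarks, img_w, img_h):
    """landmarks: (N, 2) array of normalized (x, y) Face Mesh points.
    Returns the pixel-space bounding box and the face-area / image-area ratio."""
    pts = np.asarray(landmarks, dtype=float)
    x0, y0 = pts.min(axis=0)          # top-left corner in normalized coords
    x1, y1 = pts.max(axis=0)          # bottom-right corner in normalized coords
    bbox = (int(x0 * img_w), int(y0 * img_h), int(x1 * img_w), int(y1 * img_h))
    face_area = (x1 - x0) * img_w * (y1 - y0) * img_h
    ratio = face_area / (img_w * img_h)
    return bbox, ratio

# Example: landmarks spanning the central half of a 640x480 image.
pts = [(0.25, 0.25), (0.75, 0.25), (0.5, 0.75)]
bbox, ratio = face_bbox_and_ratio(pts, 640, 480)
```

The ratio is exactly the "face size ratio" signal the scoring stage consumes: a larger ratio means more identity-bearing pixels.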
Vulnerability Scoring FaceShield assigns a 0–100 vulnerability score estimating how easily a face could be modeled by AI systems. Its purpose is to help users understand which visual characteristics increase AI learnability, not to promise absolute protection. The scoring model currently evaluates two signals: face size ratio (larger facial regions contain more identity detail) and sharpness (measured using Laplacian variance to estimate edge clarity). These signals are weighted and combined into LOW, MODERATE, or HIGH vulnerability levels with explanations such as “Large face region (more pixel detail)” and “Sharp facial features (clear edges)”. This keeps the system interpretable instead of opaque.
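A minimal sketch of such an interpretable scorer. The weights, thresholds, and `sharp_ref` normalization constant below are assumptions for illustration (the project does not publish its constants); `laplacian_variance` is a pure-NumPy stand-in for the usual `cv2.Laplacian(...).var()` sharpness measure:

```python
import numpy as np

def laplacian_variance(gray):
    """Variance of the 3x3 Laplacian response: a standard sharpness proxy."""
    k = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)
    h, w = gray.shape
    out = np.zeros((h - 2, w - 2))
    for dy in range(3):                      # valid-mode convolution
        for dx in range(3):
            out += k[dy, dx] * gray[dy:dy + h - 2, dx:dx + w - 2]
    return out.var()

def vulnerability_score(face_ratio, sharpness, sharp_ref=500.0):
    """Combine face-size ratio and sharpness into a 0-100 score with reasons."""
    score, reasons = 0.0, []
    score += min(face_ratio / 0.5, 1.0) * 50       # up to 50 pts for face size
    if face_ratio > 0.2:
        reasons.append("Large face region (more pixel detail)")
    score += min(sharpness / sharp_ref, 1.0) * 50  # up to 50 pts for sharpness
    if sharpness > 100:
        reasons.append("Sharp facial features (clear edges)")
    level = "LOW" if score < 34 else "MODERATE" if score < 67 else "HIGH"
    return round(score), level, reasons
```

Returning the triggered reasons alongside the number is what keeps the score explainable rather than a black box.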
Protective Cloaking FaceShield allows users to apply adversarial cloaking to their photos to reduce AI learnability. Fawkes is the primary cloaking mechanism; a fallback method exists only as a demo-safe backup. Fawkes works by creating adversarial perturbations: extremely small pixel adjustments that are imperceptible to the human eye but significantly change how AI models internally interpret facial features. When someone trains facial recognition on cloaked photos, the AI learns a distorted feature representation and builds an incorrect identity model; later, when shown a real photo of the person, the system fails to match them reliably. The fallback is a lightweight perturbation (subtle blur plus luminance noise).
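The demo-safe fallback could look roughly like this (a sketch: the actual kernel size and noise amplitude are not specified in the write-up, so the values here are assumptions):

```python
import numpy as np

def fallback_cloak(img, noise_std=4.0, seed=0):
    """Demo-safe fallback: mild 3x3 box blur plus low-amplitude luminance noise.
    img: HxWx3 uint8 array. Returns a perturbed uint8 array of the same shape."""
    f = img.astype(float)
    # 3x3 box blur via nine shifted sums (edges handled by edge padding).
    p = np.pad(f, ((1, 1), (1, 1), (0, 0)), mode="edge")
    blurred = sum(p[dy:dy + f.shape[0], dx:dx + f.shape[1]]
                  for dy in range(3) for dx in range(3)) / 9.0
    # Same noise on every channel, so only luminance shifts, not color.
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, noise_std, f.shape[:2])[..., None]
    return np.clip(blurred + noise, 0, 255).astype(np.uint8)
```

Unlike Fawkes, this fallback is not adversarially optimized against a feature extractor; it simply degrades the edge detail and pixel statistics that the vulnerability score flags.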
Protection Impact Analysis Instead of repeating the same score, FaceShield evaluates protection through a second analytical lens: measuring how much machine-relevant signal changed while the image remains visually similar to a human viewer. The system computes pixel-difference metrics (mean squared error), a derived protection-strength score, and qualitative indicators of reduced AI trainability.
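The pixel-difference metric and derived score reduce to a few lines; MSE is standard, while `mse_ref` below is an assumed normalization constant, not a value from the project:

```python
import numpy as np

def mse(a, b):
    """Mean squared pixel difference between original and cloaked images."""
    return float(np.mean((a.astype(float) - b.astype(float)) ** 2))

def protection_strength(original, cloaked, mse_ref=100.0):
    """Map MSE onto a 0-100 protection-strength score, capped at mse_ref."""
    return round(min(mse(original, cloaked) / mse_ref, 1.0) * 100)
```

A strength of 0 means the cloak changed nothing measurable; values near 100 indicate a large shift in machine-relevant signal even when the two images look alike to a person.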
Why I Built It
Much of the conversation around AI risk focuses on large-scale governance, policy, or corporate responsibility. While this is important, individuals currently have very little agency over how their digital identity is used or modeled by AI systems. Images shared online can be collected, reused, and incorporated into training datasets without awareness or consent. FaceShield aims to give users a degree of control over this through protective cloaking. FaceShield also aims to empower people through understanding. Transparency builds trust: instead of claiming that an image is “safe” or “unsafe”, FaceShield reveals the reasoning behind AI perception. Users can experiment, observe changes, and develop intuition about how machines interpret visual identity. The goal is not to replace existing privacy tools, but to create a framework that connects research-level defenses with user-facing understanding.