Inspiration

In an age where cameras record everything, privacy is vanishing frame by frame. Dashcams, security cameras, and AI datasets all expose faces, license plates, and personal information without consent. We wanted to create a system that protects privacy automatically, not after the fact but as the data is processed.

What it does

Veil is an AI-powered privacy shield that detects and redacts sensitive visuals and text. It blurs faces, license plates, and identifying information in dashcam footage, web content, and AI workflows, all processed locally for maximum security.

Veil integrates with:

•    SIM AI – detects sensitive information in plain text, mapping results back to visual frames for pixel-level redaction
•    BrightData – scrapes and sanitizes web content, cleaning sensitive data before it's reused or retrained
•    Claude MCP – connects directly to Claude AI through the Model Context Protocol, redacting private data before models analyze it
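The SIM AI integration hinges on mapping text-level findings back to pixels. Below is a minimal sketch of that mapping step; the data shapes (character-offset spans from the text detector, per-word OCR boxes) are our own assumptions for illustration, not SIM AI's actual API.

```python
def spans_to_boxes(flagged_spans, word_boxes):
    """Map sensitive text spans onto pixel boxes for redaction.

    flagged_spans : [(start, end), ...] character offsets flagged as
                    sensitive by a text detector (e.g. SIM AI)
    word_boxes    : [(start, end, (x1, y1, x2, y2)), ...] per-word
                    character offsets and pixel boxes from an OCR engine
    Returns the pixel boxes whose words overlap any flagged span.
    """
    boxes = []
    for ws, we, box in word_boxes:
        for fs, fe in flagged_spans:
            if ws < fe and fs < we:  # character ranges overlap
                boxes.append(box)
                break
    return boxes
```

Working at the character-offset level keeps the text detector and the OCR engine decoupled: either side can change as long as both report offsets into the same transcript.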

How we built it

We combined YOLOv8 and OpenCV for real-time face and license-plate detection, and used SIM AI's NLP models to flag sensitive textual data. BrightData's web scraper gathered online content for large-scale privacy redaction. Finally, we built an MCP-compatible module that lets Claude AI automatically call Veil for preprocessing, ensuring no sensitive data ever leaves the local system.
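The detection-to-redaction step can be sketched roughly as follows. YOLOv8 reports detections as xyxy pixel boxes; to keep this sketch self-contained we pixelate with NumPy alone rather than calling `cv2.GaussianBlur` as the real pipeline would, but the box-driven flow is the same.

```python
import numpy as np

def pixelate_regions(frame, boxes, block=8):
    """Redact each (x1, y1, x2, y2) box by pixelating it.

    frame : HxWxC uint8 array, as OpenCV would hand us
    boxes : detection boxes, e.g. YOLOv8 xyxy output rounded to ints
    block : mosaic cell size; larger means coarser redaction
    """
    out = frame.copy()
    for x1, y1, x2, y2 in boxes:
        roi = out[y1:y2, x1:x2]
        h, w = roi.shape[:2]
        if h == 0 or w == 0:
            continue
        # Downsample by striding, then repeat each cell back up: a cheap
        # mosaic that destroys identifying detail without OpenCV.
        small = roi[::block, ::block]
        up = small.repeat(block, axis=0).repeat(block, axis=1)[:h, :w]
        out[y1:y2, x1:x2] = up
    return out
```

In the live pipeline the boxes would come straight from the model's per-frame results, so redaction stays frame-accurate even when subjects move.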

Everything was developed using Python, JavaScript, Claude MCP integration hooks, and local processing pipelines.
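Under the hood, MCP messages are JSON-RPC 2.0, and a tool invocation arrives as a `tools/call` request. The stripped-down sketch below shows how a local module might dispatch one; the tool name `redact_image` and its arguments are invented for illustration, and a real server would use the official MCP SDK rather than hand-rolled JSON handling.

```python
import json

def handle_request(raw, tools):
    """Dispatch one JSON-RPC 2.0 message to a local tool registry."""
    req = json.loads(raw)
    if req.get("method") != "tools/call":
        return json.dumps({"jsonrpc": "2.0", "id": req.get("id"),
                           "error": {"code": -32601,
                                     "message": "method not found"}})
    params = req["params"]
    # MCP names the tool and passes its arguments inside params.
    result = tools[params["name"]](**params.get("arguments", {}))
    return json.dumps({"jsonrpc": "2.0", "id": req["id"],
                       "result": {"content": [{"type": "text",
                                               "text": result}]}})

def redact_image(path):
    # Hypothetical local tool: stands in for the real redaction pipeline.
    return f"redacted sensitive regions in {path}"
```

Because the tool runs locally, Claude only ever receives the redacted result, never the raw footage, which is the privacy guarantee the MCP module is built around.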

Challenges we ran into

1. Integrating multiple AI systems (SIM AI, BrightData, Claude MCP) into one cohesive pipeline
2. Aligning text-based detections from SIM AI with pixel-accurate video redaction
3. Debugging MCP communication with Claude AI

Accomplishments that we're proud of

•    Connected Veil to Claude AI through MCP for privacy-safe AI workflows
•    Used BrightData to identify and redact personal data from the web
•    Created a seamless dashcam redaction demo powered by YOLOv8 and OpenCV

What we learned

We learned how to balance AI innovation with privacy, and how complex privacy engineering becomes at scale. We gained hands-on experience with the Model Context Protocol and with integrating APIs securely while maintaining user data protection.

What's next for Veil

•    Collaborate with more LLMs using our MCP pipeline to keep AI privacy-aware
•    Bring Veil to edge devices for real-time, in-camera privacy
•    Develop a privacy-first dataset to help train ethical computer vision models
