Inspiration
The inspiration for verif.ai comes from the struggle we've seen our parents — and millions of others — face when trying to discern truth from misinformation online. From manipulated images to AI-generated headlines that spark outrage or fear, it's increasingly difficult to know what's real. As digital natives, we felt a responsibility to build a tool that empowers people to navigate the web more safely and ethically.
What it does
verif.ai is a Chrome Extension that detects AI-generated or manipulated content such as deepfakes, synthetic images, and clickbait headlines, and also flags potentially violent or misleading visuals in real time. It serves as a digital watchdog, promoting ethical content consumption and helping users make informed decisions online.
How we built it
We built a significant portion of the backend by leveraging Google's Gemini and OpenAI models for natural language understanding and multimodal analysis. The frontend is a lightweight Chrome Extension that sends image and text data securely to our backend for classification. We used Flask for the server, integrated safety APIs, and implemented custom filters to detect violent or unethical content with minimal latency.
Challenges we ran into
- Flagging synthetic content with high accuracy, given how convincing modern generative models have become.
- Integrating multiple AI models while keeping inference fast enough for a browser plugin.
- Designing a clean, non-intrusive UI that gives users clarity without fear-mongering.
- Taking a complex, socially critical issue and making it accessible and actionable for everyday users.
Accomplishments that we're proud of
- Successfully flagged synthetic content with high accuracy during testing.
- Integrated multiple AI models while keeping inference fast enough for a browser plugin.
- Designed a clean, non-intrusive UI that gives users clarity without fear-mongering.
- Took a complex, socially critical issue and made it accessible and actionable for everyday users.
What we learned
- Prompt engineering and model tuning are critical for ethical AI applications.
- Transparency in flagging (i.e., showing users why something was flagged) builds trust.
- There’s a huge demand for AI tools that defend users — not just entertain or exploit them.
- Collaboration across design, ethics, and technical implementation is essential for responsible AI.
What's next for verif.ai
Our immediate next step for verif.ai is a public launch on the Chrome Web Store, opening the project up to community feedback and open-source contributions. In later iterations, we plan to add multilingual support for detecting misinformation worldwide, and to expand coverage to other media types, including audio and video. Finally, we see potential in building a public API that lets other platforms integrate verif.ai into their own safety workflows.