Inspiration
Social media is full of content that looks polished and convincing, but it is getting harder to tell what is real, what is AI-generated, and what is misleading. We wanted to build something that meets people where misinformation actually spreads: inside the apps they already use. InstaGuard was inspired by the idea that trust and verification should be built into the browsing experience, not left as an afterthought.
What it does
InstaGuard is a Chrome extension that scans Instagram reels and YouTube shorts for signs of AI-generated media and misinformation. It analyses images, video frames, and accompanying text, then gives users a simple risk verdict, with more detailed reasoning available on demand. It also keeps a running profile-level reliability view based on previously analysed posts, helping users spot patterns rather than judging content in isolation.
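The profile-level reliability view can be sketched as a running aggregate over past scan results. This is an illustrative sketch rather than InstaGuard's actual code; the `ProfileReliability` class, the 0-to-1 risk scale, and the plain averaging are all assumptions.

```javascript
// Illustrative sketch of a profile-level reliability tracker (not the
// extension's actual implementation). Each scan yields a risk score in
// [0, 1] (0 = trustworthy, 1 = high risk); the profile view averages
// past scores so no single post dominates the judgement.
class ProfileReliability {
  constructor() {
    this.scores = new Map(); // username -> array of past risk scores
  }

  recordScan(username, riskScore) {
    if (!this.scores.has(username)) this.scores.set(username, []);
    this.scores.get(username).push(riskScore);
  }

  // Reliability = 1 - mean risk; null until the account has been scanned.
  reliability(username) {
    const history = this.scores.get(username);
    if (!history || history.length === 0) return null;
    const meanRisk = history.reduce((a, b) => a + b, 0) / history.length;
    return 1 - meanRisk;
  }
}
```

A weighted scheme (e.g. discounting older scans) would be a natural refinement, but a plain average already turns isolated verdicts into a pattern a user can act on.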
How we built it
We built InstaGuard as a Chrome extension using Manifest V3, with a service worker handling background processing and content scripts injecting scan actions directly into Instagram and YouTube. For analysis, we used Gemini-based multimodal prompts to inspect media and captions for AI artefacts, misleading claims, and general trust signals. We also added caching with chrome.storage.local to avoid repeated scans, plus a lightweight popup for API key management and extension controls.
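The caching layer described above can be sketched as a small TTL wrapper keyed by post URL. In the extension it would sit on top of `chrome.storage.local`; the sketch below uses a plain async key-value store so the logic is self-contained, and the names (`ScanCache`, `ttlMs`) are illustrative, not taken from the codebase.

```javascript
// Illustrative TTL cache for scan results, keyed by post URL.
// In the extension the backing store would be chrome.storage.local;
// here a Map-based async store stands in so the sketch runs anywhere.
class ScanCache {
  constructor(store, ttlMs) {
    this.store = store; // any object with async get(key) / set(key, value)
    this.ttlMs = ttlMs; // how long a cached result stays fresh
  }

  async get(postUrl, now = Date.now()) {
    const entry = await this.store.get(postUrl);
    if (!entry || now - entry.savedAt > this.ttlMs) return null; // miss or stale
    return entry.result;
  }

  async set(postUrl, result, now = Date.now()) {
    await this.store.set(postUrl, { result, savedAt: now });
  }
}

// Minimal in-memory stand-in for chrome.storage.local.
const memoryStore = {
  data: new Map(),
  async get(key) { return this.data.get(key); },
  async set(key, value) { this.data.set(key, value); },
};
```

Keeping the store behind a tiny async interface also makes the cache testable outside the browser, since `chrome.*` APIs only exist in the extension runtime.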
Challenges we ran into
One of the biggest challenges was balancing speed, cost, and reliability in media analysis. This was especially difficult for video, where we needed to extract enough meaningful frames to make the detection accurate without making scans feel slow or too expensive to run. Finding that tradeoff between user experience and analysis quality took a lot of iteration.
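The frame-budget tradeoff boils down to picking a small, evenly spread set of timestamps rather than decoding every frame. A minimal sketch, where the function name and frame budget are assumptions rather than the values we actually shipped:

```javascript
// Pick up to maxFrames timestamps spread evenly across a video, so short
// clips aren't oversampled and long clips stay fast and cheap to analyse.
function sampleTimestamps(durationSec, maxFrames) {
  if (durationSec <= 0 || maxFrames <= 0) return [];
  // Roughly one frame per second, capped at the budget.
  const n = Math.min(maxFrames, Math.max(1, Math.floor(durationSec)));
  const step = durationSec / n;
  // Sample the midpoint of each equal segment, avoiding the very first
  // and last instants, which are often black or transition frames.
  return Array.from({ length: n }, (_, i) => (i + 0.5) * step);
}
```

Raising `maxFrames` improves detection coverage but scales both latency and API cost linearly, which is exactly the tradeoff we kept iterating on.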
Accomplishments that we're proud of
We are proud that InstaGuard works directly inside the user’s normal social media flow instead of as a separate tool. The one-click scan experience, visual warning system, and account-level trust scoring make the project feel practical and usable rather than just a proof of concept. We are also proud that we brought together browser extension engineering, multimodal AI analysis, and platform-specific scraping into one cohesive product.
What we learned
We learned a lot about the complexity of building for real-world social platforms, especially around navigation, content extraction, and browser extension architecture. On the AI side, we learned that good results depend heavily on prompt design, context formatting, and setting clear scoring criteria for the model. We also learned that trust tools need to be transparent and user-friendly, because people are more likely to use them if the output is understandable and actionable.
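The lesson about clear scoring criteria can be illustrated with a prompt builder that spells out a rubric and a fixed output format instead of asking an open-ended question. This is a hypothetical sketch, not the prompt InstaGuard actually sends to Gemini:

```javascript
// Illustrative prompt builder: an explicit rubric and a fixed JSON output
// shape give far more consistent model scores than "is this post fake?".
function buildScanPrompt(caption) {
  return [
    "You are assessing a social media post for AI-generated media and misinformation.",
    "Score each criterion from 0 (no concern) to 3 (strong concern):",
    "1. Visual AI artefacts (warped text, extra fingers, inconsistent lighting).",
    "2. Misleading or unverifiable claims in the caption.",
    "3. Manipulative framing (false urgency, outrage bait, fake authority).",
    'Respond only as JSON: {"scores": [int, int, int], "summary": string}.',
    "",
    `Caption: ${caption}`,
  ].join("\n");
}
```

Pinning down both the scale and the response shape was what made outputs parseable and comparable across scans.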
What's next for InstaGuard
Next, we want to improve the quality and consistency of detections, expand support to more platforms like TikTok and X, and strengthen the profile reliability system with better historical analysis. We also want to add clearer explanations, stronger fact-check grounding, and a cleaner dashboard for reviewing past scans. Longer term, we see InstaGuard becoming a real-time trust layer for social media that helps people make more informed decisions before they like, share, or believe what they see.
Built With
- css
- elevenlabs
- geminiapi
- html
- javascript