The Problem
Have you ever walked past a broken sidewalk and barely noticed it? Now imagine that same crack makes the sidewalk impassable because you're in a wheelchair. A missing curb ramp isn't a minor inconvenience: it's a 15-minute detour added to what should be a 5-minute walk.
These barriers are everywhere, but nobody's keeping score. Someone calls 311, files a report, and then... nothing. The city doesn't know which problems are urgent. Communities don't know they're not alone. Things stay broken.
What if broken accessibility were as visible as a pothole? What if every citizen could report it in seconds and actually watch it get fixed?
Meet Communify
We built a platform where citizens and city officials work together to make cities actually accessible.
Citizens capture. AI categorizes.
Snap a photo of a barrier: a broken sidewalk, missing ramp, blocked entrance, whatever. Our AI figures out what it is, how bad it is, and roughly what it costs to fix. It goes straight onto a 3D map with exact GPS coordinates.
Officials get the whole picture.
Instead of scattered 311 calls, city teams see:
- Real reports mapped across their neighborhoods
- Instant alerts when new barriers pop up in their zones
- Severity levels and cost estimates to help prioritize
- Natural language search ("wheelchair access issues near parks")
- A clear workflow: reported → acknowledged → in progress → fixed
- Automated notifications that keep everyone in the loop
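That reported → acknowledged → in progress → fixed workflow is really a small state machine. Here's a rough sketch of the idea (the status names match our pipeline, but the class and transition map are illustrative, not our actual schema):

```python
from enum import Enum

class Status(Enum):
    REPORTED = "reported"
    ACKNOWLEDGED = "acknowledged"
    IN_PROGRESS = "in progress"
    FIXED = "fixed"

# Each report may only advance to the next stage in the workflow.
ALLOWED = {
    Status.REPORTED: {Status.ACKNOWLEDGED},
    Status.ACKNOWLEDGED: {Status.IN_PROGRESS},
    Status.IN_PROGRESS: {Status.FIXED},
    Status.FIXED: set(),
}

def advance(current: Status, new: Status) -> Status:
    """Validate a status transition before saving it."""
    if new not in ALLOWED[current]:
        raise ValueError(f"cannot move from {current.value} to {new.value}")
    return new
```

Validating transitions server-side keeps the map honest: a report can't jump from "reported" to "fixed" without someone acknowledging it first.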
How we made it work
Frontend: Next.js and React with Tailwind CSS. Mapbox GL for interactive 3D maps where officials can draw responsibility zones and actually see their data come to life.
AI: Google Gemini 2.0 Flash reads images like a human would: identifying barrier types, assessing how severe they are, and estimating a ballpark repair cost. We built a semantic search layer using LangGraph and FAISS so people can ask questions like "where are the wheelchair issues?" instead of searching keywords.
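The core of that semantic search is just nearest-neighbor lookup over embedding vectors. Here's a toy numpy stand-in for what FAISS does at scale (in the real system, Gemini embeddings feed a FAISS index; this function only illustrates the cosine-similarity ranking):

```python
import numpy as np

def cosine_search(query_vec: np.ndarray, report_vecs: np.ndarray, k: int = 3):
    """Rank stored report embeddings by cosine similarity to a query embedding.
    Toy stand-in for a FAISS index lookup."""
    q = query_vec / np.linalg.norm(query_vec)
    r = report_vecs / np.linalg.norm(report_vecs, axis=1, keepdims=True)
    scores = r @ q                      # cosine similarity per report
    top = np.argsort(-scores)[:k]       # best matches first
    return top, scores[top]
```

Because "wheelchair issues" and "missing curb ramp" land near each other in embedding space, the query matches reports that share no keywords with it.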
Backend: Next.js API Routes and Python FastAPI handle the work. MongoDB with geospatial indexes does the heavy lifting on location calculations. Cloudinary serves images fast.
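With a 2dsphere index on the location field, the geospatial queries are declarative. A sketch of the two query shapes we lean on (the field name `location` is illustrative; these dicts would be passed to `collection.find(...)` in pymongo):

```python
def near_query(lng: float, lat: float, max_meters: int) -> dict:
    """Find reports within max_meters of a point (GeoJSON order: [lng, lat])."""
    return {
        "location": {
            "$near": {
                "$geometry": {"type": "Point", "coordinates": [lng, lat]},
                "$maxDistance": max_meters,
            }
        }
    }

def zone_query(polygon_coords: list) -> dict:
    """Find reports inside an official's responsibility zone."""
    return {
        "location": {
            "$geoWithin": {
                "$geometry": {"type": "Polygon", "coordinates": polygon_coords}
            }
        }
    }
```

One gotcha that bit us: GeoJSON coordinates are `[longitude, latitude]`, the reverse of how most people say them out loud.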
The Details: We handle HEIC image conversion, track uploads in real-time, send instant alerts, and expand search terms automatically so you find what you're looking for even if you phrase it differently.
Real challenges we solved
Getting geospatial calculations right means dealing with weird edge cases—polygons with holes in them, shapes that twist back on themselves. We had to think through all of that.
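To make the "polygons with holes" problem concrete, here's the classic even-odd ray-casting approach, which handles holes naturally (this is a common textbook technique, not our production code, which leans on MongoDB's geospatial operators):

```python
def point_in_ring(pt, ring):
    """Ray casting: count edge crossings to the right of the point."""
    x, y = pt
    inside = False
    for (x1, y1), (x2, y2) in zip(ring, ring[1:] + ring[:1]):
        if (y1 > y) != (y2 > y):
            # x-coordinate where this edge crosses the horizontal ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def point_in_polygon(pt, rings):
    """GeoJSON-style polygon: rings[0] is the outer boundary, the rest
    are holes. A point inside a hole is outside the polygon."""
    if not point_in_ring(pt, rings[0]):
        return False
    return not any(point_in_ring(pt, hole) for hole in rings[1:])
```

The even-odd rule is also what gives self-intersecting "twisted" rings a well-defined answer: each crossing toggles inside/outside.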
Gemini is powerful but finicky: getting it to consistently return valid JSON took careful prompt engineering and smart fallbacks.
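The fallback chain looks roughly like this: try strict parsing, then strip markdown code fences, then grab the first JSON-looking block out of surrounding chatter (a sketch of the pattern, not our exact code):

```python
import json
import re

def parse_model_json(raw: str):
    """Parse JSON out of an LLM response, tolerating code fences and chatter."""
    for candidate in (
        raw,                                                   # strict first
        re.sub(r"^```(?:json)?\s*|\s*```$", "", raw.strip()),  # strip fences
    ):
        try:
            return json.loads(candidate)
        except json.JSONDecodeError:
            pass
    # Last resort: grab the outermost {...} span from the text.
    match = re.search(r"\{.*\}", raw, re.DOTALL)
    if match:
        try:
            return json.loads(match.group(0))
        except json.JSONDecodeError:
            pass
    return None  # caller falls back to defaults or retries
```

Returning `None` instead of raising lets the caller decide whether to retry the model or fall back to a default categorization.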
Image storage nearly broke us. Base64 images stored in MongoDB hit the 16 MB document limit fast. Switching to Cloudinary meant direct uploads from the browser, which made everything snappier.
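Direct browser uploads work because the server only has to sign the request, not touch the bytes. A sketch following Cloudinary's documented signing scheme (SHA-1 over alphabetically sorted params plus the API secret; the function names and `folder` param here are illustrative):

```python
import hashlib
import time

def sign_upload(params: dict, api_secret: str) -> str:
    """Cloudinary-style signature: SHA-1 of the alphabetically sorted
    params joined with '&', with the API secret appended."""
    to_sign = "&".join(f"{k}={params[k]}" for k in sorted(params))
    return hashlib.sha1((to_sign + api_secret).encode()).hexdigest()

def signed_fields(api_key: str, api_secret: str, folder: str) -> dict:
    """Fields the browser POSTs (with the file) straight to the upload
    endpoint, so image bytes never pass through our servers."""
    params = {"timestamp": int(time.time()), "folder": folder}
    return {**params, "api_key": api_key,
            "signature": sign_upload(params, api_secret)}
```

The signature expires with the timestamp, so leaking one signed request doesn't hand out permanent upload access.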
Building the semantic search felt slow at first. We optimized FAISS indexing to update incrementally and cache smartly, cutting startup time from 30 seconds down to under 5.
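The incremental trick is simply: keep the built index in memory and only embed and add new reports, instead of rebuilding everything on startup. A numpy stand-in for the pattern (with FAISS this is `index.add(new_vecs)` plus `write_index` to cache to disk; the class here is illustrative):

```python
import numpy as np

class IncrementalIndex:
    """Append-only vector index: updates cost O(new), not O(total)."""

    def __init__(self, dim: int):
        self.vecs = np.empty((0, dim), dtype=np.float32)
        self.ids: list[str] = []

    def add(self, report_ids: list[str], new_vecs) -> None:
        # Append only what changed; nothing is re-embedded or rebuilt.
        self.vecs = np.vstack([self.vecs, np.asarray(new_vecs, np.float32)])
        self.ids.extend(report_ids)

    def search(self, query_vec, k: int = 3) -> list[str]:
        dists = np.linalg.norm(self.vecs - np.asarray(query_vec, np.float32), axis=1)
        return [self.ids[i] for i in np.argsort(dists)[:k]]
```

Persisting the vectors between runs (the caching half of the fix) is what turned a 30-second cold start into a quick load-and-append.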
What we're actually proud of
🎨 The experience feels polished—dark theme, smooth interactions, works beautifully on everything from phones to desktops
🤖 The AI actually understands—it looks at images and understands what people mean when they search, not just matching keywords
🗺️ The mapping is sophisticated—point-in-polygon calculations, real-time zones, drawing tools that rival actual GIS software
⚡ It's fast—we optimized the whole stack to handle lots of reports without breaking a sweat
🔍 Search that gets what you mean—not just keyword matching, but real semantic understanding
What stuck with us
LangGraph's ReAct pattern makes AI agents actually transparent and debuggable instead of black boxes.
MongoDB's geospatial indexes are powerful—you don't need custom geometry code if you use them right.
AI prompting is seriously an art form. Small tweaks make the difference between consistent, clean outputs and garbage.
Image optimization is a real win. Cloudinary cut our load times by 80% just by not forcing giant base64 strings everywhere.
The little things matter—auto-detecting GPS location, converting those awkward HEIC files—they're what make an app feel native and intuitive instead of hacky.
The Road Ahead
We're thinking about heat maps and trend analysis to spot accessibility problems before they become disasters.
Native mobile apps with offline mode and push notifications would make reporting actually frictionless.
Community features—upvoting, comments, progress photos—would turn this into something people actually engage with.
Connecting to real 311 systems and city maintenance workflows means reports actually turn into work orders.
Multi-language support because accessibility is a global problem, not a local one.
Video submissions for dynamic barriers, like blocked paths that move around.
A public API so other developers can build accessibility tools on top of our foundation.
Accessibility shouldn't be hidden until you suddenly need it. Communify makes it visible, actionable, and impossible to ignore. We're building cities that work for everyone.
Built With
- cloudinary
- faiss
- fastapi
- geojson
- google-gemini-2.0-flash
- google-gemini-embeddings
- heic2any
- langchain
- langgraph
- mapbox-draw
- mapbox-gl-js
- mongodb-atlas
- next.js-16
- numpy
- pymongo
- python
- react-19
- tailwind-css-4
- typescript
- uvicorn
- vercel