Where's My Professor?

Inspiration

University attendance is still stuck in the dark ages. Paper sign-in sheets get passed around and signed by friends who didn't show up. QR codes get screenshotted and shared in group chats. Meanwhile, lecturers have zero real-time insight into how engaged their students actually are -- they just see a room of faces and hope for the best. We wanted to build something that makes attendance genuinely fraud-proof while giving lecturers actionable data they've never had before: how are students feeling during class?

What it does

Where's My Professor is a mobile app that verifies lecture attendance through three independent signals -- facial recognition of the lecturer, GPS geofencing, and a selfie emotion scan -- all in a single photo flow. A student opens the app, takes a photo of their lecturer (back camera), and the app automatically snaps a selfie (front camera). In the background, the back photo is run through a face recognition pipeline to verify the lecturer's identity, while the selfie is analyzed by an emotion detection model to gauge the student's engagement. GPS confirms the student is physically within 150 metres of the lecture hall. Only when all three checks pass is attendance recorded. For students, there's a profile dashboard with attendance stats and a weekly timetable pulled from their schedule. For lecturers, there's an analytics dashboard showing average engagement scores, attendance rates, total lectures, active student counts, and a GitHub-style 20-week activity heatmap of classroom sentiment over time.
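The three-signal gate described above can be sketched as a single pure function. This is an illustrative sketch, not the app's actual code: the type and function names are hypothetical, and in the real app the distance comes from geolib's getDistance while the booleans come from the face-search and emotion endpoints.

```typescript
// Illustrative sketch of the three-signal attendance gate.
// Names and types here are hypothetical, not the app's real API.

type CheckInSignals = {
  lecturerMatch: boolean;    // back photo matched the scheduled lecturer
  selfieFaceFound: boolean;  // emotion model detected a real face in the selfie
  distanceMetres: number;    // student's GPS distance from the lecture hall
};

const GEOFENCE_RADIUS_M = 150;

// Great-circle (haversine) distance in metres; geolib's getDistance
// computes the equivalent in the real app.
function distanceMetres(
  aLat: number, aLon: number, bLat: number, bLon: number,
): number {
  const R = 6371e3; // Earth radius, metres
  const toRad = (deg: number) => (deg * Math.PI) / 180;
  const dLat = toRad(bLat - aLat);
  const dLon = toRad(bLon - aLon);
  const h =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(aLat)) * Math.cos(toRad(bLat)) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(h));
}

// Attendance is recorded only when all three checks pass.
function shouldRecordAttendance(s: CheckInSignals): boolean {
  return (
    s.lecturerMatch &&
    s.selfieFaceFound &&
    s.distanceMetres <= GEOFENCE_RADIUS_M
  );
}
```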

How we built it

  • Mobile app: React Native with Expo (SDK 54), using react-native-vision-camera for the dual-photo capture flow, expo-location + geolib for GPS geofencing, and @clerk/clerk-expo for authentication. The UI uses a dark purple gradient theme with bottom-tab navigation.
  • API layer: A Hono web framework app deployed as a Cloudflare Worker (cv-worker), handling all routes -- emotion analysis, face search, attendance CRUD, student/lecturer profiles, and schedules.
  • Face recognition: A Python FastAPI microservice using DeepFace with the Facenet512 model and RetinaFace detector to extract 512-dimensional face embeddings. These embeddings are stored and searched in a Weaviate vector database using nearVector queries for real-time lecturer identification.
  • Emotion analysis: A HuggingFace Inference Endpoint running a facial emotion classification model, returning confidence scores across 7 emotions (happy, surprise, fear, angry, sad, neutral, disgust). An LLM (Gemini 3 Flash via OpenRouter) then distils those scores into a single 0-100 engagement number.
  • Database: MongoDB Atlas for all structured data -- attendance records, lectures, lecturers, and students -- with Zod schema validation on every write.

Challenges we ran into

  • Cloudflare Workers + MongoDB: The Workers runtime can silently kill TCP sockets between requests, so we had to implement connection pooling with a ping-based health check and automatic reconnection to prevent stale connections from crashing requests.
  • Dual-camera capture flow: Orchestrating the back camera then the front camera in a single smooth interaction was tricky -- the front camera auto-snaps on initialization, and timing issues meant we had to carefully manage camera refs and a pendingFront flag to avoid race conditions.
  • Serverless cold starts: Chaining a Python embedding service, a Weaviate vector search, and a HuggingFace inference endpoint from a Cloudflare Worker meant every request hit multiple external services. We had to tune timeouts aggressively (5s connection, 10s socket) and keep pool sizes minimal to stay within Workers' constraints.
  • Silent error swallowing: Our attendance save was quietly failing because of a stale ngrok URL in the frontend defaults, but the catch block only logged a warning and still showed a success animation -- making it look like everything worked when nothing was being stored.
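The ping-based health check from the MongoDB challenge above boils down to a small wrapper around the cached client. A minimal sketch, assuming a hypothetical PingableClient interface standing in for the slice of the MongoDB driver involved (a ping is db.command({ ping: 1 }) in the real driver):

```typescript
// Minimal sketch of the ping-then-reconnect pattern. PingableClient is a
// hypothetical stand-in, not the actual MongoDB driver type.
interface PingableClient {
  ping(): Promise<void>;    // e.g. db.command({ ping: 1 }) in the real driver
  connect(): Promise<void>;
}

// Return the cached client if it still answers a ping; otherwise the
// Workers runtime has likely dropped the TCP socket, so reconnect first.
async function ensureLive(client: PingableClient): Promise<PingableClient> {
  try {
    await client.ping();
  } catch {
    await client.connect(); // stale socket: re-establish before serving the request
  }
  return client;
}
```

Running this at the top of every request handler means a stale socket costs one failed ping plus a reconnect instead of a crashed request.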
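The aggressive timeouts mentioned for the cold-start chain amount to racing each external call against a timer. A hedged sketch under stated assumptions -- the real code sets driver-level connection/socket timeouts, and withTimeout here is an illustrative helper, not a function from the codebase:

```typescript
// Illustrative helper: reject a slow external call instead of letting a
// Worker request hang on a cold downstream service.
function withTimeout<T>(work: Promise<T>, ms: number, label: string): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const deadline = new Promise<never>((_, reject) => {
    timer = setTimeout(
      () => reject(new Error(`${label} timed out after ${ms}ms`)),
      ms,
    );
  });
  // Whichever settles first wins; always clear the timer afterwards.
  return Promise.race([work, deadline]).finally(() => clearTimeout(timer));
}
```

Wrapping each hop (embedding service, vector search, inference endpoint) this way turns an unbounded stall into a fast, explainable failure.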

Accomplishments that we're proud of

A genuinely three-factor attendance verification system (face recognition + GPS + emotion) that's extremely hard to game -- you can't fake being in the room, seeing the lecturer, and having a real face all at once. The lecturer analytics dashboard with a live engagement heatmap gives educators data they've simply never had access to before. The entire backend runs serverless on Cloudflare Workers edge infrastructure, keeping latency low and costs near zero. The dual-camera flow feels seamless to the user -- one tap of the shutter, and both photos are captured and analyzed in parallel within seconds.

What we learned

Vector databases like Weaviate are remarkably effective for face matching -- the nearVector search returns accurate results even with a small training set of photos per person. Composing multiple ML services (DeepFace, HuggingFace, Gemini) into a single request pipeline from a serverless function is powerful but requires careful timeout management and graceful degradation at every step. Error handling matters more than the happy path at a hackathon -- our attendance "worked" in testing but silently failed in production because we didn't surface errors to the user. Clerk made authentication almost trivial to integrate, letting us focus on the core ML and data pipeline instead of auth plumbing.
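Graceful degradation in the emotion-to-engagement step, for instance, means the pipeline should still produce a score if the LLM call fails. The sketch below is a hypothetical fallback, a confidence-weighted average over the seven emotion labels; the weights are illustrative, and in the actual app the number comes from Gemini.

```typescript
// Hypothetical fallback for the LLM distillation step: map each emotion
// label to an engagement value and take a confidence-weighted average.
// The weights are illustrative guesses, not tuned values from the app.
const ENGAGEMENT_WEIGHT: Record<string, number> = {
  happy: 90, surprise: 75, neutral: 55, fear: 40, sad: 30, angry: 25, disgust: 20,
};

function fallbackEngagement(scores: Record<string, number>): number {
  let weighted = 0;
  let mass = 0;
  for (const [emotion, confidence] of Object.entries(scores)) {
    const w = ENGAGEMENT_WEIGHT[emotion];
    if (w === undefined) continue; // ignore labels we don't recognise
    weighted += w * confidence;
    mass += confidence;
  }
  return mass > 0 ? Math.round(weighted / mass) : 50; // neutral default
}
```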

What's next for Where's My Professor?

  • Real-time lecture engagement feed: Stream aggregated emotion scores to the lecturer's dashboard live during a lecture, so they can adjust their pace or style on the fly.
  • Trend analytics: Track individual student engagement over a semester to flag students who may be disengaging before it shows up in grades.
  • Multi-institution support: Generalise the geofencing and lecturer database so any university can onboard by uploading staff photos and lecture hall coordinates.
  • Richer fraud detection: Add liveness detection to the selfie to prevent photo-of-a-photo attacks, and cross-reference timestamps to prevent duplicate check-ins.
  • Push notifications: Remind students to check in when they're near a scheduled lecture, and alert lecturers to low-attendance sessions in real time.

Built With

react-native, expo, typescript, hono, cloudflare-workers, python, fastapi, deepface, weaviate, huggingface, mongodb-atlas, clerk, geolib