Inspiration
We built InTouch because one of our teammates had a family member with dementia. Memories were made only to be forgotten. We decided it doesn't have to be that way anymore, so we created InTouch: a way to bridge the memory gap, reconnecting families and friends through an AR-powered assistant for people living with memory loss and social anxiety.
What it does
InTouch helps someone with memory loss or social anxiety stay oriented in face-to-face conversation. The app uses the camera to recognize (or enroll) a person, then listens via speech-to-text and builds a running transcript. After pauses, it sends the conversation to the backend, which returns a short synopsis (who, relationship, topics, cues) stored in MongoDB, plus suggested replies the user can tap to respond with confidence. A People view surfaces who they’ve talked to and the latest synopses. On the Fetch.ai side, the InTouch (orchestrator), Reply Curator, and Synthesizer agents can run the same ideas over Agentverse for a decentralized demo path.
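The memory record the backend persists after each pause can be pictured as a small document; this sketch shows one plausible shape (all field names here are illustrative, not the actual schema):

```python
from datetime import datetime, timezone

def build_synopsis_doc(person_id, name, relationship, topics, cues, replies):
    # One conversation-memory record per recognized person (field names illustrative).
    return {
        "person_id": person_id,
        "name": name,
        "relationship": relationship,
        "topics": topics,               # short list of what was discussed
        "cues": cues,                   # prompts the user can glance at mid-conversation
        "suggested_replies": replies,   # tappable responses
        "updated_at": datetime.now(timezone.utc).isoformat(),
    }

doc = build_synopsis_doc(
    "p1", "Maya", "cousin",
    ["her garden"], ["ask about the tomatoes"], ["That sounds lovely!"],
)
# With Motor this would be persisted roughly as:
#   await db.conversations.insert_one(doc)
```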
How we built it
- Frontend: React + Vite, Zustand for state, camera + face descriptors for match/enroll, Web Speech API for live transcription, Framer Motion / Phosphor for UI.
- Backend: FastAPI on Python 3.12, Motor + MongoDB for faces and conversation memory, Google Gemini for synopsis + curated replies.
- Agents (Fetch.ai uAgents): one user message -> two parallel agent jobs (memory synopsis vs. reply ideas) -> one merged response, published on Agentverse under the name InTouch.
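The fan-out/merge described above can be sketched with plain asyncio; the two agent calls here are hypothetical stand-ins for the real Synthesizer and Reply Curator handlers:

```python
import asyncio

async def memory_synopsis(message: str) -> str:
    # Stand-in for the Synthesizer agent call (hypothetical).
    return f"synopsis of: {message}"

async def reply_ideas(message: str) -> list[str]:
    # Stand-in for the Reply Curator agent call (hypothetical).
    return [f"reply to: {message}"]

async def handle_message(message: str) -> dict:
    # One user message fans out to two parallel jobs, then merges into one response.
    synopsis, replies = await asyncio.gather(
        memory_synopsis(message), reply_ideas(message)
    )
    return {"synopsis": synopsis, "replies": replies}

print(asyncio.run(handle_message("Hi, it's your cousin Maya")))
```

Running both jobs concurrently keeps the round trip close to the slower of the two calls rather than their sum.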
Challenges we ran into
- Production vs. local: the live site must call a public API URL, not localhost, and the client must be rebuilt whenever VITE_* env vars change.
- CORS: the API must allow the exact Vercel origin (it's easy to typo an env var or leave a trailing slash).
- Render builds: the default Python 3.14 tried to compile pydantic_core (Rust) and failed; fixed by pinning Python 3.12 in Docker so prebuilt wheels install cleanly.
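The trailing-slash CORS bug above is easy to guard against by normalizing the configured URL down to the exact Origin the browser sends. A minimal sketch (the FRONTEND_URL name and the FastAPI wiring in the comment are illustrative):

```python
from urllib.parse import urlsplit

def normalize_origin(url: str) -> str:
    # Reduce a URL to the Origin a browser sends: scheme://host[:port],
    # with no path and no trailing slash.
    parts = urlsplit(url.strip())
    return f"{parts.scheme}://{parts.netloc}"

# Wiring with FastAPI's CORSMiddleware would look roughly like:
# app.add_middleware(
#     CORSMiddleware,
#     allow_origins=[normalize_origin(os.environ["FRONTEND_URL"])],
#     allow_methods=["*"],
#     allow_headers=["*"],
# )
```

With this, a value like `https://example.vercel.app/` (trailing slash) still matches what the browser actually sends.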
Accomplishments that we're proud of
- End-to-end path: face → speech → AI synopsis + suggestions → persisted memory in MongoDB.
- Dual story: same intelligence over REST for the web app and over Fetch.ai agents for the ASI narrative.
- Shipping to real URLs (Render + Vercel) and debugging real browser security (HTTPS, CORS, mixed content).
What we learned
- It was the first hackathon for one of our teammates, who gained a lot of experience building products in high-pressure environments.
- We also learned a lot about agents and making them work with the architecture we had in mind.
What's next for InTouch: Bridging Memory Gaps
- Richer recognition cues (photos, voice notes, recurring themes) and caregiver dashboards.
- Stronger privacy: encryption, consent flows, and optional on-device processing where possible.
- Offline / low-latency modes and notifications for “you spoke with X yesterday about Y.”
- Deeper Fetch.ai integration (e.g. clearer agent marketplace story, metering, multi-user sessions).
- User testing with families and clinicians to validate tone, safety, and real-world adoption.