Inspiration
Whenever developers ask users for feedback on a software product, they get lots of unstructured feedback that’s hard to turn into actionable changes. We’ve personally gone through the tedious back-and-forth of suggestions and implementations where neither side really knows what the other means. Test users say what they think they like or don’t like, but there’s no fast way to see how they truly feel about changes. We wanted to fix that: make feedback something you can see, aggregate, and turn into real improvements instead of guesswork.
What it does
Synapse makes beta testing scalable by giving each tester a personalized, live sandbox with no local setup. We capture what’s hard to get from words alone: facial expression changes (emotion over time), telemetry, mouse tracking, and other interactions. Testers describe changes in plain language or by voice, and the app updates in real time in their sandbox. Developers see aggregated data and clear pain points in the dashboard and can create automatic GitHub PRs from one sandbox or merge several into a single PR, with long-term memory that tracks preferences and avoids repeating the same changes across testers. In short, we turn a passive feedback loop into an active, real-time development session.
How we built it
We built the platform on Cloudflare’s edge stack. Cloudflare Workers handle all sandbox API logic, file operations, AI orchestration, and the in-sandbox Studio UI. Each sandbox runs in an isolated Cloudflare Container backed by a Durable Object, giving it a persistent filesystem and a dedicated lifecycle at the edge. Code generation runs through Cloudflare AI, so inference stays next to the sandbox for low-latency, context-aware edits.

For long-term memory we integrated Supermemory: we store every prompt and file change per sandbox, creating a persistent knowledge base across sessions and restarts. Before each AI call we retrieve relevant prior changes for coherent, incremental updates, and the same history powers automated pull-request changelogs. We use Google Gemini to refine raw voice and text into a single, clear instruction before calling the code model, so speech and chat map cleanly to edits.

In the dashboard, ElevenLabs powers real-time speech-to-text (so testers can talk through their feedback) and text-to-speech for AI responses. Hume AI runs in the browser for emotion detection over the webcam stream, and we persist those signals with the rest of the session.

Convex manages application state and orchestrates the GitHub workflows: repo imports, branching, commits, and pull requests. Authentication is handled by Better Auth, with Resend for transactional email (verification, password reset, and tester invites). Sandboxes are organized under Projects linked to GitHub repositories, enabling direct React app imports and an end-to-end flow from isolated editing to AI-generated PRs.
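The per-edit flow described above can be sketched roughly as follows. The names and shapes here are illustrative (the real Supermemory, Gemini, and Cloudflare AI clients are stubbed behind an interface), but the ordering — recall prior context, refine the raw input, generate the edit, then persist it for next time — matches what we described:

```typescript
// Sketch of the per-edit pipeline inside a Worker. All external services
// (Supermemory, Gemini, Cloudflare AI) are hidden behind this interface;
// the concrete client names and signatures here are assumptions.
type EditRequest = { sandboxId: string; rawInput: string };

interface Deps {
  recallContext(sandboxId: string, query: string): Promise<string[]>; // Supermemory lookup
  refineInstruction(raw: string): Promise<string>;                    // Gemini cleanup
  generateEdit(instruction: string, context: string[]): Promise<string>; // code model
  remember(sandboxId: string, entry: string): Promise<void>;          // persist for next call
}

async function handleEdit(req: EditRequest, deps: Deps): Promise<string> {
  // 1. Pull relevant prior changes so edits stay coherent across sessions.
  const context = await deps.recallContext(req.sandboxId, req.rawInput);
  // 2. Collapse noisy voice/text feedback into one clear instruction.
  const instruction = await deps.refineInstruction(req.rawInput);
  // 3. Generate the code edit using both the instruction and prior context.
  const patch = await deps.generateEdit(instruction, context);
  // 4. Record the prompt + change so future edits and PR changelogs see it.
  await deps.remember(req.sandboxId, `${instruction} => ${patch}`);
  return patch;
}
```

Keeping the external services behind one interface is also what let us swap pieces (like adding the Gemini refinement step) without touching the rest of the Worker.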
Challenges we ran into
One challenge was connecting PostHog, a real-time product analytics tool, to our sandbox sessions to improve data collection. After several hours trying to integrate it with the sandbox (embedding in the iframe and wiring events was harder than we expected), we pivoted and built custom analytics instead: emotion-over-time from Hume, click and mouse tracking, and a full transcript tied to the timeline. That gave us exactly the signals we needed for the dashboard and ended up fitting our product better than a generic embed.
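The emotion-over-time signal is essentially raw Hume samples rolled up into fixed time buckets for the dashboard chart. A minimal sketch of that aggregation (bucket size and the sample shape are illustrative, not our exact schema):

```typescript
// Roll raw per-frame emotion scores into a time series by averaging
// within fixed windows (e.g. 1s buckets), keyed by ms since session start.
type Sample = { t: number; score: number };

function emotionOverTime(
  samples: Sample[],
  bucketMs: number
): { t: number; avg: number }[] {
  const buckets = new Map<number, { sum: number; n: number }>();
  for (const s of samples) {
    const key = Math.floor(s.t / bucketMs) * bucketMs; // window start time
    const b = buckets.get(key) ?? { sum: 0, n: 0 };
    b.sum += s.score;
    b.n += 1;
    buckets.set(key, b);
  }
  // Emit buckets in chronological order with the average score per window.
  return [...buckets.entries()]
    .sort(([a], [b]) => a - b)
    .map(([t, { sum, n }]) => ({ t, avg: sum / n }));
}
```

Click and mouse events get the same timestamp key, which is what lets the dashboard line everything up against the transcript timeline.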
A big technical challenge was that ElevenLabs transcription was almost too good: it was so verbose that it overloaded our Cloudflare AI model’s context and led to suboptimal code edits. We fixed this by adding a refinement layer with the Gemini API between the ElevenLabs output and Cloudflare AI: Gemini strips noise and outputs a single, minimal instruction. That improved performance and latency, cut token use, and made sandbox updates much more accurate for testers.
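The refinement layer boils down to a tightly scoped prompt plus a hard cap on how much transcript we forward at all. A sketch of that helper (the exact prompt wording and the character cap are assumptions, not our production values):

```typescript
// Build the Gemini refinement prompt: distill a verbose, rambling
// transcript into ONE short edit instruction before it reaches the
// code model. Wording and the 4000-char cap are illustrative.
function buildRefinementPrompt(transcript: string, maxChars = 4000): string {
  // Guard the context window: for very long transcripts, keep only the
  // most recent portion, since the latest speech carries the actual request.
  const clipped =
    transcript.length > maxChars ? transcript.slice(-maxChars) : transcript;
  return [
    "You turn rambling tester feedback into ONE short, actionable UI edit instruction.",
    "Reply with the instruction only, no explanation.",
    "",
    `Feedback: ${clipped}`,
  ].join("\n");
}
```

Because the code model now only ever sees one short instruction plus retrieved context, token use per edit dropped sharply regardless of how long the tester talked.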
On the process side, we kept running into large merge conflicts and overwriting each other’s code. A few hours in we adopted a simple rule: someone would yell “PULL,” and everyone pulled and committed right then, keeping conflicts small. It was silly but it worked.
Accomplishments that we're proud of
We’re proud of getting isolated Cloudflare Containers (backed by Durable Objects) working with persistent filesystems so each sandbox keeps its state across multiple user testing sessions. We’re also proud of how we integrated Supermemory with the sandboxes and the balance we struck: we had to avoid overloading it with every code edit and prompt while still storing enough to give the AI useful context for the next edit and to build informed PR changelogs and aggregation. Getting that balance right took real iteration.
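Part of that balance was keeping each stored entry compact: a refined prompt plus a size-capped summary of the change, rather than full file contents. A rough sketch of the entry shape (the field names and the cap are assumptions for illustration):

```typescript
// Illustrative shape of what one edit contributes to Supermemory: the
// refined prompt plus a size-capped diff, so entries stay small enough
// to retrieve in bulk for context and PR changelogs.
function toMemoryEntry(prompt: string, diff: string, maxDiffChars = 500): string {
  const compact =
    diff.length > maxDiffChars
      ? diff.slice(0, maxDiffChars) + " [truncated]" // cap large diffs
      : diff;
  return `prompt: ${prompt}\nchange: ${compact}`;
}
```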
We knew that as a UI/UX feedback-loop product we’d need our own UI/UX to be strong. We’re proud of how the app looked and felt by the end of the hackathon: a design system, dark mode, and a clear path from landing to sandbox to analytics. It took a lot of iteration (using our own app would probably have helped 😆).
We’re proud of the solutions we found for the challenges above, from token and context management to refinement pipelines, and that we delivered most of our original scope in 36 hours. We’re also proud of how we worked in incremental MVPs and milestones, which kept our work and time management under control.
What we learned
We learned a lot about building scalable infrastructure after realizing how much we had to account for (isolation, lifecycle, and persistent state) when maintaining secure sandboxes for every beta tester. We also learned how valuable containers are for code distribution and per-tester isolation.
What's next for Synapse
We’re excited about Synapse and have a lot of ideas we didn’t get to in the hackathon. We want a stronger aggregation framework that uses Supermemory context better when merging multiple sandboxes into one PR. We’d also like active screen replay for richer data and admin monitoring; we tried to get it working but ran into conflicts with the sandbox environment and had to drop it, so making it work is a priority. We’re thinking about adding sentiment analysis on the full user transcript (not just video emotion) to pinpoint frustrating features more precisely. Finally, we’d like collaborative sandbox sessions so multiple testers can work in the same sandbox instead of one sandbox per tester. We want Synapse to be the most complete feedback loop for product teams.
Built With
- better-auth
- cloudflare
- convex
- docker
- elevenlabs
- gemini
- github-api
- hume-ai
- javascript
- next.js
- react
- resend
- shadcn
- supermemory
- tailwind-css
- typescript
