Inspiration

The best ideas come from seeing what you can't see alone. Traditional AI chat interfaces give you one voice and one perspective, a design that invites tunnel vision. We wanted to build a tool where you could speak an idea and watch it refract into multiple viewpoints, like a prism splitting light. Parallax was born from a question: what if brainstorming felt less like talking to an assistant and more like debating with a sharp, fast-moving team?

What it does

Parallax is an audio-native AI exploration canvas. You speak or type a topic, and four AI personas — Expansive, Analytical, Pragmatic, and Socratic — help you explore it from different angles. Ideas branch visually across six paths (clarify, go deeper, challenge, apply, connect, surprise), building a living map of your thinking. You can talk directly to any node through voice commands, switch perspectives mid-exploration, and synthesize everything into a structured report or plan.
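The persona-and-branch model above can be sketched as a small data structure. This is an illustrative sketch, not Parallax's actual code: the `IdeaNode` shape, `branch` helper, and id scheme are assumptions.

```typescript
// Hypothetical data model for the exploration canvas (names are illustrative).
type Persona = "expansive" | "analytical" | "pragmatic" | "socratic";
type BranchPath = "clarify" | "go_deeper" | "challenge" | "apply" | "connect" | "surprise";

interface IdeaNode {
  id: string;
  content: string;
  persona: Persona;
  path?: BranchPath;  // which of the six paths produced this node
  parentId?: string;  // undefined for the root topic
}

// Branching creates a child node along one of the six paths,
// so the graph records both lineage and the move that was made.
function branch(parent: IdeaNode, path: BranchPath, persona: Persona, content: string): IdeaNode {
  return {
    id: `${parent.id}/${path}`,
    content,
    persona,
    path,
    parentId: parent.id,
  };
}
```

Keeping the path on each node is what makes a later synthesis step possible: a report generator can walk the tree and group children by the move that created them.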

How we built it

The frontend is React 19 + TypeScript + Vite, with Zustand for state management and XY Flow for the canvas. LLM generation routes through Mistral's low-latency models, with per-persona model configuration. Voice integration uses Higgs ASR V3.0 and TTS V2.5 (via Eigen AI) for speech-to-text and text-to-speech, and Higgs Audio Understanding V3.5 (via Boson AI) for voice-driven canvas commands: speak to a node to branch, promote, or start a dialogue. All data persists locally in IndexedDB.
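Per-persona model configuration could look like the routing table below. This is a hedged sketch: the model ids, temperatures, prompts, and `buildRequest` helper are assumptions, not the project's actual values.

```typescript
// Illustrative per-persona routing table; model names, prompts, and the
// request helper are placeholders, not Parallax's real configuration.
type Persona = "expansive" | "analytical" | "pragmatic" | "socratic";

interface PersonaConfig {
  model: string;        // Mistral model id (placeholder value)
  temperature: number;  // higher = more divergent output
  systemPrompt: string;
}

const personaConfigs: Record<Persona, PersonaConfig> = {
  expansive:  { model: "mistral-small-latest", temperature: 0.9, systemPrompt: "Widen the idea: analogies, adjacent domains, bold variants." },
  analytical: { model: "mistral-small-latest", temperature: 0.2, systemPrompt: "Decompose the idea: assumptions, trade-offs, failure modes." },
  pragmatic:  { model: "mistral-small-latest", temperature: 0.4, systemPrompt: "Ground the idea: next steps, constraints, quick wins." },
  socratic:   { model: "mistral-small-latest", temperature: 0.6, systemPrompt: "Probe the idea with questions; assert nothing directly." },
};

// Assemble a chat-completion request body for a persona and topic.
function buildRequest(persona: Persona, topic: string) {
  const cfg = personaConfigs[persona];
  return {
    model: cfg.model,
    temperature: cfg.temperature,
    messages: [
      { role: "system", content: cfg.systemPrompt },
      { role: "user", content: topic },
    ],
  };
}
```

Splitting temperature per persona is one simple way to make the voices diverge in behavior, not just wording.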

Challenges we ran into

Making streaming JSON render progressively (not as raw text) required building an incremental parser that extracts structured content mid-stream. Getting the radial branching menu to feel spatial and intuitive — not cluttered — took multiple iterations. Routing voice commands to the correct canvas node while maintaining context from the exploration graph was a complex state management challenge.
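The incremental-parsing idea can be illustrated with a minimal sketch. This is not the project's parser; it shows one way to pull the partial value of a known string field out of an in-flight JSON buffer so the UI can render text as it streams, with the field name and function below being hypothetical.

```typescript
// Minimal sketch: extract the (possibly still-incomplete) value of a
// string field from a partial JSON buffer arriving over a stream.
function extractPartialField(buffer: string, field: string): string | null {
  const key = `"${field}"`;
  const keyIdx = buffer.indexOf(key);
  if (keyIdx === -1) return null;

  // Find the opening quote of the value after the colon.
  const colonIdx = buffer.indexOf(":", keyIdx + key.length);
  if (colonIdx === -1) return null;
  const openQuote = buffer.indexOf('"', colonIdx + 1);
  if (openQuote === -1) return null;

  // Scan toward the closing quote, honoring escapes; if the stream
  // hasn't delivered it yet, return whatever text we have so far.
  let value = "";
  for (let i = openQuote + 1; i < buffer.length; i++) {
    const ch = buffer[i];
    if (ch === "\\" && i + 1 < buffer.length) {
      value += buffer[i + 1]; // simplified unescape: keep next char as-is
      i++;
    } else if (ch === '"') {
      break; // value is complete
    } else {
      value += ch;
    }
  }
  return value;
}
```

The key property is that the same call works on both incomplete and complete buffers, so the renderer never has to distinguish "mid-stream" from "done".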

Accomplishments that we're proud of

The voice-to-canvas loop feels magical — you right-click a node, hold the mic, speak a command, and watch the canvas respond. The four personas genuinely produce different thinking styles, not just different wording. And the progressive streaming display makes AI generation feel alive rather than jarring.

What we learned

Multi-perspective AI is more than a prompt trick — the persona system meaningfully changes how users explore ideas when each voice has a distinct character. Voice-first UX requires rethinking interaction patterns that were designed for clicks. And sometimes the best design decision is removing features (we killed the quadrant view and multi-lane system to focus on what actually worked).

What's next for Parallax

Collaborative sessions where multiple people explore the same canvas with different personas. Deeper voice integration — full conversational flow with any node, not just commands. Export to popular planning tools. And a mobile experience where voice becomes the primary input for brainstorming on the go.

Built With

React 19, TypeScript, Vite, Zustand, XY Flow, Mistral, Higgs (Eigen AI / Boson AI), IndexedDB