## Inspiration
Google Maps tells you how to get somewhere. Wayfinder tells you what you'll experience getting there.
We were frustrated that every navigation app treats a journey as just a start and end point. But anyone who's ridden the 130 bus through the Presidio as eucalyptus trees frame your first glimpse of the Golden Gate knows — the journey IS the destination. We built Wayfinder to capture that.
## What it does
Upload one photo of a place in San Francisco. Wayfinder's vision AI identifies it, plans a public transit route from your current location, and builds a chapter-by-chapter immersive guided journey — not just directions, but a full storytelling experience.
Each segment of your trip becomes a "chapter" with:
- AI narration written as a knowledgeable local friend (specific streets, history, insider tips)
- Mood-matched Spotify soundtrack that evolves from calm departure → energetic transit → celebratory arrival
- Real transit data with actual bus lines, stop counts, and transfer instructions
- Live weather at your destination with best-time-to-visit analysis
- Points of interest you'll pass along the way that most people walk right past
- Audio narration via AI-generated text-to-speech
- Interactive map showing the full route with POI markers
The journey reveals progressively — you click "Begin Your Journey" and step through chapters one at a time, like reading a story about the city.
## How we built it
Frontend: Next.js 16 with a fully custom React UI. No chat framework — we built our own SSE stream parser for real-time tool-call rendering, plus progressive chapter reveals with animations, image upload for vision AI, and Spotify iframe embeds. Tailwind CSS for the dark-mode immersive aesthetic.
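The heart of that parser is buffering raw chunks and splitting on the blank-line boundary that terminates each SSE event. A minimal sketch (event names and shapes here are illustrative, not our exact implementation or Anthropic's wire format):

```typescript
// Minimal SSE parser sketch: accumulate raw chunks, split on the blank
// line that terminates each event, JSON-decode `data:` payloads.
type SSEEvent = { event?: string; data: unknown };

function createSSEParser() {
  let buffer = "";
  return function feed(chunk: string): SSEEvent[] {
    buffer += chunk;
    const events: SSEEvent[] = [];
    let boundary: number;
    while ((boundary = buffer.indexOf("\n\n")) !== -1) {
      const raw = buffer.slice(0, boundary);
      buffer = buffer.slice(boundary + 2); // partial events stay buffered
      let event: string | undefined;
      let data = "";
      for (const line of raw.split("\n")) {
        if (line.startsWith("event:")) event = line.slice(6).trim();
        else if (line.startsWith("data:")) data += line.slice(5).trim();
      }
      if (data) events.push({ event, data: JSON.parse(data) });
    }
    return events;
  };
}
```

Because unfinished events stay in the buffer, the parser tolerates arbitrary network chunk boundaries — which is what makes mid-token UI updates possible.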
API Layer: Next.js API route that calls Anthropic's Claude API directly with streaming and manages the multi-turn tool-calling loop. Five tools orchestrate the experience: `plan_journey`, `show_weather_card`, `show_map_route`, `generate_journey_chapter`, and `search_knowledge_base`.
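The tool-calling loop follows the standard Anthropic Messages pattern: call the model, and while it stops with `tool_use`, run the requested tools and feed `tool_result` blocks back. A simplified, non-streaming sketch with an injected `callModel` so it runs without network access (names are illustrative, not our production code):

```typescript
// Simplified multi-turn tool-calling loop (non-streaming sketch; the real
// app streams each turn over SSE). `callModel` stands in for a raw fetch
// to Anthropic's Messages API.
type ToolUse = { type: "tool_use"; id: string; name: string; input: unknown };
type ModelTurn = { stop_reason: "tool_use" | "end_turn"; content: any[] };
type Tool = (input: unknown) => Promise<unknown>;

async function runToolLoop(
  callModel: (messages: any[]) => Promise<ModelTurn>,
  tools: Record<string, Tool>,
  messages: any[],
): Promise<any[]> {
  for (;;) {
    const turn = await callModel(messages);
    messages.push({ role: "assistant", content: turn.content });
    if (turn.stop_reason !== "tool_use") return messages;
    // Run every requested tool, then hand the results back as tool_result
    // blocks in a single user message — the shape the Messages API expects.
    const results = await Promise.all(
      turn.content
        .filter((b): b is ToolUse => b.type === "tool_use")
        .map(async (b) => ({
          type: "tool_result",
          tool_use_id: b.id,
          content: JSON.stringify(await tools[b.name](b.input)),
        })),
    );
    messages.push({ role: "user", content: results });
  }
}
```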
Backend: FastAPI Python server with Railtracks multi-agent orchestration (9 function nodes). Integrates Google Maps (geocoding + transit directions), Open-Meteo weather,
DigitalOcean Gradient AI (vision, TTS, image generation), Augment Context Engine (semantic knowledge base), and Spotify.
Knowledge Base: We built a curated San Francisco knowledge base with 15+ documents covering neighborhoods, POIs, transit systems, food, history, and hidden gems — all indexed via Augment's Context Engine for semantic search and RAG-powered narration enrichment.
## Challenges we ran into
- The Vercel AI SDK's `@ai-sdk/anthropic` package has a bug in its Zod 4 schema conversion that drops the `type: "object"` field from tool schemas, causing Anthropic's API to reject every tool call. We solved this by bypassing the SDK entirely, calling the Anthropic API directly with raw `fetch`, and constructing the SSE stream format by hand.
- Balancing the chapter-reveal UX — showing all chapters at once felt like a wall of text, but making users wait too long felt frustrating. The progressive "Begin Journey" → "Next Chapter" flow hit the sweet spot.
- DigitalOcean Gradient's async-invoke pattern for TTS requires polling, which means audio narration loads asynchronously after chapters appear — we made this feel natural by generating TTS in the background.
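That background generation boils down to a poll-until-ready helper. A hedged sketch — the status shape and the injected `checkStatus` callback are placeholders, not the actual DigitalOcean Gradient async-invoke API:

```typescript
// Poll-until-ready helper for async TTS jobs. The JobStatus shape and
// `checkStatus` callback are illustrative stand-ins for the real
// async-invoke status endpoint.
type JobStatus<T> = { done: boolean; result?: T };

async function pollUntilDone<T>(
  checkStatus: () => Promise<JobStatus<T>>,
  intervalMs = 1000,
  maxAttempts = 30,
): Promise<T> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const status = await checkStatus();
    if (status.done) return status.result as T;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error("TTS job did not finish in time");
}
```

Chapters render immediately with a silent placeholder; when the promise resolves we attach the audio URL, so the wait never blocks the reveal flow.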
## Accomplishments we're proud of
- The journey chapters genuinely feel like a story. The narration knows what direction you're traveling, what you'll see out the window, and what historical context surrounds each block.
- Real transit data makes every journey actually usable — these aren't hypothetical routes, they're actual bus lines with real departure times.
- The Augment knowledge base enriches narration with details no LLM would know on its own — hidden gems, local tips, neighborhood history.
## What we learned
- Building a custom SSE stream parser that handles multi-turn tool calling with progressive UI updates taught us more about streaming protocols than any tutorial could.
- Integrating 6+ APIs (Anthropic, Google Maps, Open-Meteo, DigitalOcean Gradient, Augment, Spotify) into a cohesive real-time experience requires careful error handling — any one failure shouldn't break the whole journey.
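In practice that meant wrapping each optional integration in a timeout-plus-fallback helper, so a failing service degrades only its own card. A sketch under our own assumed defaults (function name and timeouts are illustrative):

```typescript
// Per-integration graceful degradation: race the call against a timeout
// and fall back to a safe default on error or timeout, so one down
// service can't take out the whole journey.
async function withFallback<T>(
  call: () => Promise<T>,
  fallback: T,
  timeoutMs = 5000,
): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const onTimeout = new Promise<T>((resolve) => {
    timer = setTimeout(() => resolve(fallback), timeoutMs);
  });
  try {
    return await Promise.race([call(), onTimeout]);
  } catch {
    return fallback; // Weather API down? Show the journey without the card.
  } finally {
    clearTimeout(timer);
  }
}
```

Resolving (rather than rejecting) on timeout keeps the fallback path identical for slow and broken services alike.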
## What's next for Wayfinder
- Multi-city support beyond San Francisco
- Real-time journey mode with GPS tracking — chapters advance automatically as you travel
- Journey sharing — send your route to friends as an interactive story
- Cross-journey memory — "Show me all the sunset spots I've saved across all my journeys"
## Built With
- anthropic-claude
- augment-context-engine
- digitalocean-gradient-ai
- elevenlabs
- fastapi
- google-maps
- next.js
- open-meteo
- openstreetmap
- python
- railtracks
- react
- spotify
- tailwind-css
- tts
- typescript
- vercel-ai-sdk