Inspiration
Every child between 3 and 12 is handed a screen. And almost everything on that screen is designed to be consumed: watched, scrolled, tapped through. Passively. Stories are different. Stories are where children first learn who they could be. But even the best children's books have had the same problem for five hundred years: the child is always on the outside, watching someone else be the hero. We kept coming back to one question: what if they weren't?
What it does
Storyworld is an AI-powered interactive story experience where your child is the main character: by name, by likeness, and by the choices they make. A parent takes a photo and enters their child's name and age (the photo is deleted automatically once the full story is generated). The child picks a story world. Fifteen seconds later, they are reading an original story that has never existed before, written around them specifically, with their name in the prose and their face in the illustrations.

The story is not passive. Every scene contains tappable words that open lore panels: short illustrated entries about the world, written at the child's reading level, each ending with a curiosity question designed to spark conversation.

We built Storyworld specifically for the 4-to-12 age band: old enough to read independently, young enough to still believe a dragon might talk back. Research on children's spatial cognition and color psychology shaped every design decision, from the muted warm palette that recedes behind illustrations rather than competing with them, to the calm transitions in the reading experience versus the expressive, bouncy moments reserved for choices and rewards.
How we built it
Frontend: Node.js, with the Nunito typeface throughout and a warm storybook palette (cream, dusty amber, muted terracotta) over a subtle paper grain texture and painterly card surfaces. Backend: Supabase, exposing three core endpoints: story generation, branch interaction, and avatar generation. Storage is in-memory for the demo, designed to extend to persistent storage with appropriate parental consent flows post-hackathon.
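Here is a minimal sketch of that API surface, assuming an Express-style Node server; the route names and handler stubs below are placeholders for illustration, not our exact wiring.

```ts
import express from "express";

const app = express();
app.use(express.json());

// Stubs standing in for the real pipelines; names and shapes are hypothetical.
const generateStory = async (body: unknown) => ({ story: "...", lore: {} });
const continueBranch = async (body: unknown) => ({ nextScene: "..." });
const generateAvatar = async (body: unknown) => ({ avatarId: "..." });

// POST /api/story: name, age band, world, avatar id -> full story plus pre-generated lore
app.post("/api/story", async (req, res) => res.json(await generateStory(req.body)));

// POST /api/branch: story id plus chosen branch -> next scene
app.post("/api/branch", async (req, res) => res.json(await continueBranch(req.body)));

// POST /api/avatar: uploaded photo -> stylized hero portrait (photo discarded after)
app.post("/api/avatar", async (req, res) => res.json(await generateAvatar(req.body)));

app.listen(3000);
```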
AI layer: the Claude API powers three distinct functions. First, story generation from a structured JSON schema with hard cognitive-load rules baked into the system prompt: sentence length, vocabulary complexity, scene count, and Flesch-Kincaid targets all vary by age band.
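As a rough illustration, here is what that generation call might look like using the Anthropic TypeScript SDK; the age-band values, rules, and model id below are illustrative placeholders, not our tuned production settings.

```ts
import Anthropic from "@anthropic-ai/sdk";

const anthropic = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

// Hypothetical per-band cognitive-load rules; the real values are tuned per band.
const AGE_BANDS = {
  "4-6":   { maxWordsPerSentence: 8,  scenes: 5, fkGrade: 1.5 },
  "7-9":   { maxWordsPerSentence: 12, scenes: 7, fkGrade: 3.5 },
  "10-12": { maxWordsPerSentence: 18, scenes: 9, fkGrade: 5.5 },
} as const;

export async function generateStory(name: string, band: keyof typeof AGE_BANDS, world: string) {
  const r = AGE_BANDS[band];
  const system =
    `You write original children's stories as JSON matching the provided schema. ` +
    `Hard rules: at most ${r.maxWordsPerSentence} words per sentence, exactly ${r.scenes} scenes, ` +
    `Flesch-Kincaid grade near ${r.fkGrade}. Content must be gentle and age-appropriate.`;

  const msg = await anthropic.messages.create({
    model: "claude-sonnet-4-5", // illustrative model id
    max_tokens: 4096,
    system,
    messages: [{
      role: "user",
      content: `Write a story starring ${name} in the world "${world}". Respond with JSON only.`,
    }],
  });

  const block = msg.content[0];
  return JSON.parse(block.type === "text" ? block.text : "{}");
}
```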
Second, lore generation: every tappable word's blurb, illustration prompt, and curiosity question is pre-generated at story creation time, so every tap is instant, with zero latency. Third, the photo-to-avatar pipeline: it generates a stylized hero portrait anchored to the child's likeness and held consistent across every scene image. All story content was authored with age-appropriate safety constraints at the prompt level.
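A minimal sketch of the lore pre-generation step, with hypothetical types and names; the point is that every Claude call happens up front at story creation, so a tap at read time is a synchronous map lookup.

```ts
// Hypothetical shape of a pre-generated lore entry.
interface LoreEntry {
  word: string;
  blurb: string;             // a short in-world story at the child's reading level
  illustrationPrompt: string;
  curiosityQuestion: string;
}

// Pre-generate lore for every tappable word at story creation time.
export async function pregenerateLore(
  words: string[],
  generateEntry: (word: string) => Promise<LoreEntry>, // wraps a Claude call
): Promise<Map<string, LoreEntry>> {
  const entries = await Promise.all(words.map(generateEntry));
  return new Map(entries.map((e) => [e.word, e]));
}

// At read time: no network call, just a lookup, so the panel opens instantly.
export function onWordTap(lore: Map<string, LoreEntry>, word: string): LoreEntry | undefined {
  return lore.get(word);
}
```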
Photos are processed in memory and deleted immediately after generation. Persistent storage in future versions would require explicit parental consent mechanisms, and we designed for that from the start rather than as an afterthought.
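A sketch of that in-memory flow, with hypothetical names: the photo bytes are used once for the avatar call and then scrubbed, never written to disk or a database.

```ts
// Hypothetical handler: the uploaded photo exists only as a Buffer in memory.
export async function handlePhotoUpload(
  photo: Buffer,
  generateAvatar: (img: Buffer) => Promise<string>, // returns a stylized portrait id
): Promise<string> {
  try {
    return await generateAvatar(photo); // the only use of the raw bytes
  } finally {
    photo.fill(0); // best-effort scrub before the buffer is garbage-collected
  }
}
```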
Challenges we ran into
Merge conflicts. Four engineers, one day, one repo. Parallel work converges all at once, and it did. We froze features earlier than felt comfortable and stabilized what we had rather than chasing what we wanted.

Getting it right on iPad. Storyworld was always meant to be held: a child and a parent, an iPad between them. The gap between "works in a browser" and "feels right in a child's hands on a tablet" is wider than most web demos reveal. SwiftUI would have given us this natively; building it on the web in a day meant real trade-offs. We closed most of the gap, and we know exactly where the rest of it is.

Image consistency. Keeping the hero recognizable from scene to scene is a hard technical problem, because current AI image generation models have no persistent memory across inference calls. We work around this with structured prompt engineering, sketched below.
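One hedged sketch of that workaround: a fixed character sheet, written once from the avatar step, is injected verbatim into every scene's image prompt so the model redraws the same hero each time. The traits and wording here are illustrative, not our production prompt.

```ts
// Canonical character sheet, fixed once after avatar generation and reused everywhere.
const characterSheet = [
  "HERO (draw identically in every image):",
  "curly brown hair, round glasses, green raincoat, red boots", // example traits
  "storybook watercolor style, soft warm palette",
].join("\n");

// Every scene prompt prepends the sheet and restates the consistency constraint.
export function scenePrompt(sceneDescription: string): string {
  return `${characterSheet}\n\nSCENE: ${sceneDescription}\n` +
    `Keep the hero's face, hair, and outfit exactly as specified.`;
}
```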
Accomplishments we're proud of
The lore system. A child can tap any highlighted word and step briefly sideways into the world (not a definition but a story, an image, a question) and then step back into the narrative without losing their place. That interaction loop felt genuinely new to us and genuinely right for this age group.
What we learned
Building for children exposed every crack in our system that we would have let slide for an adult user. When the avatar wasn't quite consistent between scenes, an adult would shrug. A child would notice immediately, and there is no technical explanation that satisfies a seven-year-old. Consistency isn't a nice-to-have. It's the whole illusion.

Testing with real kids also humbled us in a different way. They have no patience for loading states, no tolerance for confusion, and no interest in recovering gracefully from errors. If something feels wrong for even a second, they're gone. Every bug we thought was minor turned out to matter. We learned that the bar for "good enough" is completely different when your user hasn't learned yet to lower their expectations.