Inspiration

It all started with a TikTok video: a teacher shared how she turned her kids’ doodles into adventure books, and the curiosity and excitement it sparked in her classroom were undeniable. ✨ What if we could bring that magic to every child, everywhere?

The Challenge:

  • Kids are at their creative peak, but traditional passive reading often leaves them bored and disengaged.
  • Parents and teachers need ways to keep students engaged in learning and gauge their understanding.

The Solution: Sprout is the future of storytelling, interactive learning, and self-expression. Through Sprout, kids can:

  • Upload drawings straight from their imagination
  • Read a Sprout-generated interactive choose-your-own-adventure story
  • Be engaged through vivid imagery
  • Learn vocabulary while having fun
  • Get tested on their comprehension, adapted to their grade level

With Sprout, children aren’t just reading stories—they’re living them. They upload their own drawings, personalize their adventures, and build language skills through play. Teachers can tailor stories to lessons, integrate vocabulary, and expand beyond just reading into science, history, and more. Sprout.io is a win-win.

What it does

Sprout starts with your student’s drawing. From there, we build an engaging choose-your-own-adventure story with generative visuals based on your input. The adventure is filled with grade-level vocabulary to hit comprehension goals, with a quick comprehension check at the end. (A simplified sketch of the story structure follows the feature list below.)

  • ✨ Unleash Your Creativity: Draw something magical, like characters, a wild adventure, animals, or your wildest dreams.
  • 📸 Upload Your Drawing: Snap a picture and upload it to Sprout.io.
  • 📚 Pick a Reading Level: Select your grade level so the story and its vocabulary match your reading comprehension.
  • 🔎 Choose Your Own Adventure: Read as the story unfolds with interactive choices, engaging visuals, and exciting vocabulary blended in along the way.
  • 🧠 Test Your Knowledge: Take a quiz at the end to see how far you’ve come and check your comprehension of the vocab words.
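
Under the hood, each adventure boils down to a small branching data structure. Here’s a minimal illustrative sketch in Python; the field names are simplified stand-ins, not our exact schema:

```python
from dataclasses import dataclass, field

@dataclass
class Chapter:
    text: str               # chapter text with grade-level vocab woven in
    image_url: str          # Luma-generated illustration for this chapter
    choices: list[str]      # the "choose your own adventure" branches
    vocab_words: list[str]  # words to reinforce and quiz on later

@dataclass
class Story:
    grade_level: int   # drives vocabulary and sentence difficulty
    drawing_url: str   # the student's uploaded drawing
    chapters: list[Chapter] = field(default_factory=list)
    quiz: list[dict] = field(default_factory=list)  # end-of-story comprehension check
```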

How it works

Simply go to the demo, select your grade level, upload a drawing of your wildest creations, and click “Begin Your Story”! Enjoy a seamless learning experience as the story unfolds, with vocab words integrated along the way.

How we built it

Sprout.io was developed using FlutterFlow for a seamless user experience. We integrated several AI-driven APIs, sketched end-to-end at the bottom of this section:

  • Google Cloud - Gemini Flash 2.0 AI Studio generates dynamic, age-appropriate storylines from the student’s image and a prompt we engineered. We specifically chose Gemini for the Multimodal 2.0 Flash model, which let us combine visual and text inputs. We also utilized Gemini’s multi-category content filtering to prioritize safety for kids on this platform.
  • Luma API powers our image generation, bringing stories to life in the most creative way possible. We used the Dream Lab API and tinkered with the reference and styling weights to ensure that we could maintain consistent visual storylines, while also injecting a bit of creativity and unexpected cliff-hangers for students.
  • FlutterFlow for our front-end! This tool was really helpful, since it let us visually diagram our control flow while also integrating our custom FastAPI endpoints for reactive generative AI, all with a seamless integration and clean UI perfect for kids!
  • Imgur API handles quick image uploads within the FlutterFlow platform, passing them along to Luma’s API.
  • Custom FastAPI Endpoints for our text and image generation and modifications, since our control-flow was pretty complex.
  • Google Cloud and Docker for persistent hosting of the backend. We also used ngrok and uvicorn for local testing.
  • Perplexity for help expanding our prompt engineering, thanks to its strong natural-language and research capabilities.
  • Lots of free food, swag, and entertainment from the Stanford TreeHacks organizers and sponsors! Thank you!

By combining these resources, Sprout creates a high-value learning experience with personalized, interactive storytelling. Project link with code.
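
Here’s a condensed sketch of how those pieces fit together in one of our FastAPI endpoints. It’s simplified for readability: the API keys are placeholders, error handling is dropped, and the Luma request shape is paraphrased from their image-generation API rather than copied verbatim from our code:

```python
import google.generativeai as genai
import requests
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
genai.configure(api_key="GEMINI_API_KEY")  # placeholder
model = genai.GenerativeModel("gemini-2.0-flash")

class StoryRequest(BaseModel):
    drawing_url: str   # the Imgur link produced by FlutterFlow's upload step
    grade_level: int

@app.post("/generate_chapter")
def generate_chapter(req: StoryRequest):
    # 1. Fetch the student's drawing (already hosted on Imgur).
    image_bytes = requests.get(req.drawing_url, timeout=30).content

    # 2. Ask Gemini for a grade-appropriate chapter grounded in the drawing.
    prompt = (
        f"Write one chapter of a choose-your-own-adventure story for a "
        f"grade {req.grade_level} reader based on this child's drawing. "
        f"Weave in grade-level vocabulary and end with two choices."
    )
    story = model.generate_content(
        [prompt, {"mime_type": "image/png", "data": image_bytes}]
    )

    # 3. Illustrate the chapter with Luma, reusing the drawing as a weighted
    #    reference so visuals stay consistent. Payload shape approximated.
    luma = requests.post(
        "https://api.lumalabs.ai/dream-machine/v1/generations/image",
        headers={"Authorization": "Bearer LUMA_API_KEY"},  # placeholder
        json={
            "prompt": story.text[:500],
            "image_ref": [{"url": req.drawing_url, "weight": 0.6}],
        },
        timeout=60,
    )

    return {"chapter": story.text, "image_job": luma.json()}
```

In production this runs in Docker on Google Cloud; locally we served it with uvicorn and exposed it to FlutterFlow through an ngrok tunnel.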

Challenges we ran into

  • Front-End via FlutterFlow: At the end of the day, a technical feat is not very impactful without consideration of the users. While our team had a strong back-end background, we are all beginners at front-end development. Thankfully, we found FlutterFlow. We spent time understanding how the platform works, figuring out how to wire API calls into our custom FastAPI endpoints, and getting our data formatted, encoded, and displayed on each page. This took significant time, but it led us to a seamless product that runs on multiple devices, from phones to web screens.

  • Hiding User-Visible Latency: We faced latency issues with generative AI within our platform. Kids have short attention spans, and our goal is to keep them engaged as long as possible in an educational state of mind. That conflicts with the long time frames generative AI requires: each Luma Labs API query could take up to 10 seconds, and our Gemini Flash 2.0 API queries via Studio took roughly 8 seconds. To fix this, we generate every possible future story trajectory one chapter ahead, so the time the user spends reading covers part of the generation time. Even though we will not use every branch, doing this work concurrently while students read makes the app feel much faster for the end user.
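
In code, the look-ahead is essentially a concurrent fan-out over the possible choices. A minimal sketch (the generation call is stubbed out; in the real app it wraps the Gemini + Luma round trip):

```python
import asyncio

async def generate_next_chapter(story_state: dict, choice: str) -> dict:
    # Stand-in for the real Gemini + Luma round trip (~8-10 s each).
    await asyncio.sleep(0)
    return {"choice": choice, "text": "...", "image_url": "..."}

async def pregenerate_branches(story_state: dict, choices: list[str]) -> dict:
    # Fan out over every possible choice concurrently: total wall-clock time
    # is roughly one generation, not one per branch, and the student's
    # reading time absorbs most of it.
    chapters = await asyncio.gather(
        *(generate_next_chapter(story_state, c) for c in choices)
    )
    # Cache all candidates; branches the reader never picks are discarded.
    return dict(zip(choices, chapters))
```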

  • Protections & Safety via Gemini Filters: We’ve all seen generative AI go off the rails with its creations, and that is exactly the situation we want to avoid when presenting to younger audiences. As a result, we specifically chose Gemini Flash 2.0 for the API’s adjustable safety-filtering thresholds, which we set to explicitly filter out content that is dangerous or harmful for minors.
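
With the google-generativeai SDK, that configuration looks roughly like this (the exact thresholds we shipped may differ, but the mechanism is the same):

```python
import google.generativeai as genai
from google.generativeai.types import HarmBlockThreshold, HarmCategory

# Stricter-than-default filtering for a kids' platform: block content even
# at low probability across every harm category Gemini exposes.
SAFETY_SETTINGS = {
    HarmCategory.HARM_CATEGORY_HARASSMENT: HarmBlockThreshold.BLOCK_LOW_AND_ABOVE,
    HarmCategory.HARM_CATEGORY_HATE_SPEECH: HarmBlockThreshold.BLOCK_LOW_AND_ABOVE,
    HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT: HarmBlockThreshold.BLOCK_LOW_AND_ABOVE,
    HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_LOW_AND_ABOVE,
}

model = genai.GenerativeModel("gemini-2.0-flash", safety_settings=SAFETY_SETTINGS)
```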

  • Maintaining Consistent Context: Our stories need to balance consistency with creativity as students move through the choose-your-own-adventure process, which is hard given the reactiveness of generative AI, especially with both visual and text input AND output. We picked multimodal tools that let us manage context (Gemini Flash 2.0's context capabilities) and maintained strong references to past information by tuning the adjustable styling and reference weights in Luma's Dream Lab API.
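
A sketch of those two context levers, with placeholder URLs and weights rather than the exact values we settled on (the image_ref/style_ref payload keys approximate Luma's reference and style parameters):

```python
import google.generativeai as genai

model = genai.GenerativeModel("gemini-2.0-flash")

# Lever 1: Gemini's chat interface carries the whole conversation, so each
# new chapter is generated with every prior chapter and choice in context.
chat = model.start_chat(history=[])

def continue_story(choice: str) -> str:
    response = chat.send_message(
        f"The reader chose: {choice}. Continue the story, keeping "
        f"characters, setting, and plot consistent with everything so far."
    )
    return response.text  # chat.history now includes this turn automatically

# Lever 2: each Luma request reuses the original drawing (and a style
# anchor) as weighted references so illustrations stay visually consistent.
luma_payload = {
    "prompt": "summary of the newest chapter",
    "image_ref": [{"url": "https://i.imgur.com/<drawing>", "weight": 0.6}],
    "style_ref": [{"url": "https://i.imgur.com/<style-anchor>", "weight": 0.4}],
}
```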

Accomplishments that we're proud of

  • Working together well! Collaboration across our different backgrounds let us connect at the event and build out a really creative idea.
  • Figuring out how to use FlutterFlow from square one, and being adaptive and persistent in debugging!
  • Live-sharing code and pair programming when things got tough!
  • Building a project with technical depth and social impact, and having a fun time through all of it!
  • Tailoring multiple technologies and optimizations to our use case, from safety filtering to multimodal creativity to reducing user-visible latency.

What we learned

We learned a TON about FlutterFlow. Not knowing much about frontend design, we spent the majority of our hackathon working in it, and we managed to get our app working seamlessly cross-platform and in sync with our dynamic backend endpoints. That took a ton of time, but it was a great learning experience in creating a full-stack app from nothing but an idea. Special shoutout to the FlutterFlow team for staying late into the night with hackers to help debug ❤️. We learned a ton about front-end and had great conversations.

We also developed a ton of API routes in FastAPI, which we then interfaced with from FlutterFlow, adding actions to route data across the frontend and to send and receive information from our generative AI models. Working with these generative AI models was very new to us as well, and we picked up a lot on how to use Gemini’s Multimodal 2.0 Flash API and Luma’s API to generate the content we needed, doing a ton of prompt engineering along the way. We also learned the value of discussing and diagramming our workflows and API connections using visual tools like Excalidraw for a better design process.

What's next for Sprout

We are planning on expanding Sprout.io’s functionality to better benefit teachers when creating personalized learning plans for students!

  • Implement a teacher-side flow to monitor individual students’ progress, see what kids are drawing, and review quiz results for their class
  • Our comprehension quizzes also generate tons of meaningful data that can feed data-driven, or even AI-driven, strategies in the classroom
  • We’d love to add ElevenLabs or a similar API to read stories aloud via text-to-speech, enhancing the experience, especially for kids who struggle with reading comprehension. Gemini Flash 2.0 is also releasing audio capabilities at the end of the month; we'd love to integrate them.
  • Bold, highlight, and underline key words throughout the story for better immersion. We were thinking of including Markdown support (just like Devpost!) but didn't have enough time to fully flesh out this idea.
  • Make Sprout.io an even more kid-focused application by animating their drawings (further Luma integration)
  • Add a wider selection of quiz questions and topics that quizzes can cover!
  • Add a point system (something like Duolingo's Streaks) to keep kids engaged in learning!

Built With

  • FlutterFlow
  • FastAPI
  • Google Cloud (Gemini 2.0 Flash)
  • Luma (Dream Lab API)
  • Imgur API
  • Docker