About the Project
Stella Academy is a conversational STEM tutor powered by open‑source language models (gpt-oss:20b). It brings learning to life by grounding answers in real‑time NASA space imagery, and can enrich explanations with relevant links via Google Custom Search. We're launching with a friendly chat experience, with modes for exploring several NASA Open API endpoints.
Our Inspiration
Science can feel distant and abstract, especially when you can’t see what you’re learning about. We wanted a tutor that talks like a person and connects concepts to the cosmos:
“You see that storm in today’s image of Earth? That’s a perfect example of how pressure gradients work.”
This project is also a celebration of what’s possible with open and accessible tools:
- Open‑source LLMs provide the transparency and flexibility needed to create a responsible educational tool.
- NASA’s Open APIs offer a universe of stunning, publicly funded imagery that makes learning tangible.
- Together, they form a low‑cost and adaptable foundation perfect for classrooms, hobbyists, and anyone curious about the universe.
What It Does Now
- A friendly chatbot that can help with math, physics, and astronomy questions.
- Three distinct learning modes—Explorer (Kid), Cadet (Teen), and Scholar (Uni)—that adjust the tone and depth of the conversation.
- Automatically pulls in NASA images to make explanations more concrete and engaging.
- Optionally uses Google Custom Search (CSE) to add a few helpful links to responses when API keys are provided, since links generated by the open‑source model itself tend to be hallucinated.
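The optional CSE enrichment step can be sketched roughly like this; the helper names, env‑var names, and default link count are our placeholders, not the project's actual code:

```typescript
// Rough sketch of the optional link-enrichment step. Helper names and
// env-var names (GOOGLE_CSE_KEY, GOOGLE_CSE_CX) are assumptions.
const CSE_ENDPOINT = "https://www.googleapis.com/customsearch/v1";

// Build the CSE request URL (pure, so it is easy to test).
export function buildCseUrl(query: string, apiKey: string, cx: string, num = 3): string {
  const params = new URLSearchParams({ key: apiKey, cx, q: query, num: String(num) });
  return `${CSE_ENDPOINT}?${params.toString()}`;
}

// Fetch a few links; fall back to an empty list on any failure so the
// tutor's answer still renders without enrichment.
export async function fetchLinks(query: string): Promise<{ title: string; link: string }[]> {
  const key = process.env.GOOGLE_CSE_KEY;
  const cx = process.env.GOOGLE_CSE_CX;
  if (!key || !cx) return []; // enrichment is strictly optional
  try {
    const res = await fetch(buildCseUrl(query, key, cx));
    if (!res.ok) return [];
    const data = await res.json();
    return (data.items ?? []).map((i: any) => ({ title: i.title, link: i.link }));
  } catch {
    return [];
  }
}
```

Returning real search results instead of model-generated URLs is what keeps the links trustworthy.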
A Quick Look Under the Hood
- Frontend: Built with Next.js and styled with playful GSAP animations. We use Clerk for simple and secure sign‑in.
- AI Models: Open‑source models served through Ollama, running locally or via a hosted provider.
- Data: Imagery and context sourced from NASA’s Open APIs, including APOD (Astronomy Picture of the Day) and EPIC (Earth Polychromatic Imaging Camera).
- Backend Jobs: A robust queueing system using BullMQ and Redis manages user requests, with the worker on Cloud Run.
- Enrichment: Optional integration with Google CSE lets the worker add relevant, trustworthy links.
- Security: .env.local for development and Google Secret Manager for production secrets.
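As an illustration of the imagery grounding described above, here is a minimal sketch of fetching NASA's APOD. The endpoint and the DEMO_KEY fallback follow NASA's public API documentation; the helper names and the NASA_API_KEY env var are our assumptions:

```typescript
// Minimal sketch of pulling today's Astronomy Picture of the Day (APOD).
// Endpoint and DEMO_KEY fallback are NASA's documented defaults;
// function and env-var names are illustrative.
const APOD_ENDPOINT = "https://api.nasa.gov/planetary/apod";

export function buildApodUrl(apiKey = "DEMO_KEY", date?: string): string {
  const params = new URLSearchParams({ api_key: apiKey });
  if (date) params.set("date", date); // YYYY-MM-DD
  return `${APOD_ENDPOINT}?${params.toString()}`;
}

export async function fetchApod(apiKey = process.env.NASA_API_KEY ?? "DEMO_KEY") {
  const res = await fetch(buildApodUrl(apiKey));
  if (!res.ok) throw new Error(`APOD request failed: ${res.status}`);
  // Typical response fields: title, date, explanation, url, media_type.
  return res.json();
}
```

The worker can hand the image `url` and `explanation` to the model as context, so the tutor talks about a picture the learner is actually looking at.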
Key Lessons Learned
- Grounding answers in specific NASA images dramatically reduces abstract explanations and keeps learners engaged.
- A single model can teach in different styles with well‑crafted, role‑based prompts.
- Even a simple queueing system improves UX by keeping the interface responsive during peak loads.
- Running an LLM on a local machine (an RTX 4090) showed us how much memory speed matters: the 120b model made generation unbearably slow.
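The role‑based prompting lesson can be sketched as a simple style table keyed by mode; the wording of each style line is our own illustration, not the project's actual prompts:

```typescript
// Illustrative role-based prompts for the three modes; one model,
// three teaching styles. The exact wording here is hypothetical.
type Mode = "explorer" | "cadet" | "scholar";

const STYLE: Record<Mode, string> = {
  explorer: "Explain like I'm 8: short sentences, playful analogies, no jargon.",
  cadet: "Explain for a high-schooler: define terms and show one worked example.",
  scholar: "Explain at university level: use precise notation and derivations.",
};

export function buildSystemPrompt(mode: Mode): string {
  return `You are Stella, a friendly STEM tutor. ${STYLE[mode]}`;
}
```

Swapping the system prompt is all it takes to move between Explorer, Cadet, and Scholar without changing the underlying model.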
Challenges We Overcame
- Consistent formatting: Getting the model to reliably produce clean Markdown/LaTeX was tricky. We tightened prompts and added post‑processing so output renders correctly instead of breaking pages.
- Redis configuration: We learned that BullMQ requires a TLS Redis URI (e.g., rediss://…) and won't work with Upstash's REST API. Health checks and clearer logging helped us diagnose issues quickly.
- API reliability: NASA endpoints can be slow or unavailable at times. We added fallbacks and friendlier error messaging to handle this gracefully.
What’s Next on Our Mission
- Mini‑Lessons (“Missions”): Structured learning paths with scaffolded steps, hints, and quick knowledge checks.
- Richer image interaction: Add notes on images and enable “point‑and‑ask” questions about specific features.
- Adaptive learning: Optional feedback to adjust the pace and depth for each user.
- Flexible model integration: A plug‑in system for different open‑source models by subject.
- Quality upgrades for the default model: Enhance gpt‑oss:20b with RAG or fine‑tuning to improve factual grounding and domain coverage.
Built With
- bullmq
- clerk
- gcp
- gpt-oss
- gsap
- nextjs
- ollama
- redis
- typescript
