Inspiration

Current LMS platforms (Canvas, Blackboard, Moodle) were built to upload content and post grades, not to support the reality of student life: overlapping deadlines, confusing forums, slow feedback, and opaque grades. We wanted to answer one question: “If we designed an LMS around student wellbeing and instructor workload from day one, what would it look like—with AWS as the backbone?” Prism turns the LMS into a student-life copilot: it doesn’t just show information; it helps students and instructors decide what to do next.

What it does

Prism is a Canvas-class LMS with a built-in AI layer. It includes a full core experience—dashboard, unified calendar, course shells (modules, assignments, quizzes, discussions, files, pages), and both a student grade view and an instructor gradebook with a SpeedGrader-style interface. Everything is designed to feel familiar enough to adopt quickly, but smarter under the hood.

On top of this, Prism adds intelligence: a semantic forum that groups similar questions into “megathreads” and visualizes confusion as a “Bubble View,” an AI grading co-pilot that drafts rubrics and feedback while flagging inconsistent TA grading, and a context-aware course chatbot grounded in real course data. It also surfaces simple “Resource ROI” and at‑risk signals so instructors can see which materials are working and which students may need outreach, with drafted, empathetic messages to send.
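The megathread grouping can be pictured as greedy clustering over question embeddings: a new post joins the first megathread whose seed it resembles closely enough, otherwise it starts its own. This is a minimal sketch with toy vectors and an illustrative threshold; in Prism, the embeddings would come from Amazon Bedrock, and the function and field names here are hypothetical.

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def group_megathreads(posts, threshold=0.85):
    """Greedily assign each post to the first megathread whose seed
    embedding is similar enough; otherwise start a new megathread.
    `posts` is a list of (post_id, embedding) pairs."""
    megathreads = []  # each item: {"seed": embedding, "post_ids": [...]}
    for post_id, emb in posts:
        for mt in megathreads:
            if cosine(mt["seed"], emb) >= threshold:
                mt["post_ids"].append(post_id)
                break
        else:
            megathreads.append({"seed": emb, "post_ids": [post_id]})
    return megathreads

# Toy embeddings: two near-duplicate questions and one unrelated post.
posts = [
    ("q1", [1.0, 0.0, 0.1]),
    ("q2", [0.9, 0.05, 0.1]),  # similar to q1 -> joins q1's megathread
    ("q3", [0.0, 1.0, 0.0]),   # unrelated -> its own megathread
]
threads = group_megathreads(posts)
```

The megathread sizes are also what the Bubble View visualizes: bigger clusters mean more students stuck on the same thing.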

How we built it

On the frontend, Prism uses Next.js 15, React 19, TypeScript, Tailwind CSS v4, shadcn/ui, Framer Motion, Three.js/React Three Fiber (for the Bubble View), FullCalendar, Recharts, and TanStack Query/Table to deliver a responsive, modern UI. The backend is built with FastAPI (Python 3.12), organized into routers for courses, assignments, forum, grading, and chatbot, exposing REST APIs consumed by the frontend.
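The router split looks roughly like this (an illustrative layout; actual file names may differ):

```
backend/
├── main.py            # FastAPI app; mounts the routers below
└── routers/
    ├── courses.py     # course shells, modules, pages, files
    ├── assignments.py # assignments and quizzes
    ├── forum.py       # threads, megathreads, Bubble View data
    ├── grading.py     # gradebook, rubrics, AI feedback drafts
    └── chatbot.py     # context-aware course chatbot sessions
```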

The intelligence is powered by Amazon Bedrock, which handles the course chatbot, forum summarization and clustering, rubric drafting, and grading feedback—always grounded in LMS data rather than the open web. Amazon Aurora PostgreSQL acts as the source of truth for users, enrollments, courses, assignments, quizzes, grades, and rubrics, while Amazon DynamoDB stores high-throughput data like forum threads, megathread groupings, chatbot sessions, and notifications. Typical flows (like a student asking a question on an assignment) read structured context from Aurora and session/forum context from DynamoDB, construct a prompt, and send it to Bedrock for a tailored response.
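The grounding step for a typical chatbot request can be sketched as plain prompt assembly: structured rows from Aurora and recent items from DynamoDB are serialized into the prompt before it goes to Bedrock. The data shapes below are illustrative, and the commented-out Bedrock call uses a hypothetical model ID.

```python
import json

def build_grounded_prompt(assignment, recent_posts, question):
    """Combine structured assignment context (fetched from Aurora) and
    recent forum/session context (fetched from DynamoDB) into a single
    prompt, so the model answers from course data, not the open web."""
    context = {
        "assignment": assignment,            # row from Aurora
        "recent_forum_posts": recent_posts,  # items from DynamoDB
    }
    return (
        "You are a course assistant. Answer ONLY from the context below.\n"
        f"Context:\n{json.dumps(context, indent=2)}\n\n"
        f"Student question: {question}"
    )

# Example inputs (illustrative):
assignment = {"title": "PS3", "due": "2025-03-01", "points": 100}
posts = [{"author": "ta_1", "text": "Office hours moved to Friday."}]
prompt = build_grounded_prompt(assignment, posts, "When is PS3 due?")

# The real call then goes to Bedrock, e.g. (model ID elided):
# bedrock = boto3.client("bedrock-runtime")
# reply = bedrock.converse(
#     modelId="...",
#     messages=[{"role": "user", "content": [{"text": prompt}]}],
# )
```

Because everything the model sees is assembled from Aurora and DynamoDB reads, the answer is constrained to what the course actually contains.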

Challenges we ran into

A major challenge was fitting a realistic “new Canvas” into hackathon scope without losing credibility. We had to carefully choose which workflows to implement end‑to‑end (dashboard, course view, forum, gradebook, chatbot, grading co‑pilot) and which to leave as future work, so that judges would see a system that feels usable rather than like a thin demo.

We also had to design AI features that assist instructors instead of replacing them, especially around grading. That meant modeling data correctly across Aurora and DynamoDB, keeping Bedrock prompts tightly grounded in real course data, and optimizing both database access patterns and prompt size to keep forum and chatbot interactions low‑latency while remaining trustworthy.
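Keeping prompts small came down to budgeting context, newest-first. This is a sketch of that idea; the character budget is illustrative (a real implementation would count tokens), and the function name is ours.

```python
def trim_context(posts, budget=2000):
    """Keep the newest posts that fit within a rough character budget,
    returned oldest-first so the prompt reads chronologically.
    `posts` must be ordered oldest-first."""
    kept, used = [], 0
    for post in reversed(posts):  # walk newest-first
        cost = len(post["text"])
        if used + cost > budget:
            break                 # budget exhausted; drop older posts
        kept.append(post)
        used += cost
    return list(reversed(kept))   # restore oldest-first order

# Three 900-character posts against a 2000-character budget:
posts = [
    {"id": 1, "text": "a" * 900},
    {"id": 2, "text": "b" * 900},
    {"id": 3, "text": "c" * 900},
]
trimmed = trim_context(posts, budget=2000)  # oldest post is dropped
```

Smaller prompts meant faster Bedrock responses, which is what kept the forum and chatbot interactions feeling low‑latency.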

Accomplishments that we're proud of

We are proud that Prism feels like a tool students and instructors could actually use tomorrow, not just a static UI. A student can log in, see a meaningful dashboard, navigate real course shells, participate in a forum, see their grades, and get help from an AI copilot that understands the course context rather than hallucinating.

We’re also proud of how we used Amazon Bedrock: not as a gimmicky chatbot, but as a practical engine for better rubrics, more consistent grading, faster question resolution, and clearer instructor insights. The Bubble View forum, built on DynamoDB‑backed interactions and Bedrock summaries, turns an overwhelming list of posts into a visual map of where the class is struggling.

What we learned

We learned that grounding LLMs in structured data from Aurora and DynamoDB is critical: it dramatically improves reliability, reduces hallucinations, and makes prompts easier to reason about. Once the data spine is solid, adding intelligent behavior like summarization, suggestion, and anomaly detection becomes much more straightforward.

We also saw that the most valuable AI features quietly remove friction instead of adding new complexity. Drafted feedback, summarized threads, suggested outreach messages, and clear confusion maps did more to improve the experience than any single “wow” feature. Finally, we confirmed that student life is a systems problem: real impact comes from connecting deadlines, grading, forums, and analytics into a coherent whole.

What's next for Prism

Next, we want to broaden Prism’s analytics around at‑risk students by incorporating more engagement signals (forum participation, missed work patterns, timing behavior) and tying them more directly to instructor dashboards. This would let teaching teams move from reactive crisis management to proactive support.

We also plan to build an instructor “course coach” that uses trends in Aurora grade data and DynamoDB interactions to suggest which modules, materials, or assessments might need redesign, and to pilot Prism in real classes to measure impact on response times, grading fairness, and drop/fail rates.

Built With

amazon-aurora, amazon-bedrock, amazon-dynamodb, fastapi, next.js, postgresql, python, react, tailwindcss, three.js, typescript