🚀 ClarityPath — Turning AI Into a Thinking Partner

ClarityPath is an AI-powered personalized learning platform that transforms the undergraduate experience from passive task completion into intentional, skill-driven growth.

Problem statement: How can the undergraduate experience become more intentional and problem-solving oriented?


💡 The Problem

Students face:

  • Information overload
  • Lack of structured reflection
  • Passive learning & decision-making
  • AI anxiety (AI seen as a shortcut or threat, not a skill-building partner)

Education is a high-cost, high-commitment investment, yet students often lack visibility into:

  • Opportunity cost
  • Return on time and money
  • Risks of irreversible choices
  • Skills development that actually aligns with career direction

We model the mismatch as a simple “value gap”:

Value Gap = Effort Invested - Strategic Skill Development

ClarityPath reduces this gap by structuring learning, reflection, and problem-solving.


✅ What It Does

ClarityPath:

  • Uses an onboarding quiz to understand a student's mindset, strengths, and uncertainty
  • Generates personalized learning modules tailored to the student
  • Adds reflection prompts and problem-solving challenges (not just content)
  • Generates custom images for each module to reinforce understanding
  • Promotes AI literacy and future-ready skills

Instead of: AI → answers

We built: Student → reflection → personalized modules → skill development


🧠 How We Built It (Two AI Models)

Model 1: Personalized Learning Engine (Local LLM)

Base model: Llama 3.2 3B (Meta's open-source model)

Fine-tuning method: QLoRA

Instead of retraining all 3B parameters, QLoRA:

  • Freezes the entire base model
  • Injects small LoRA adapter layers into attention blocks
  • Quantizes base weights to 4-bit so it fits on a free Colab GPU (~15GB VRAM)
  • Trains only the adapters (~1–2% of parameters)
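To make the "~1–2% of parameters" figure concrete, here is a back-of-the-envelope sketch. The architecture numbers (28 layers, hidden size 3072, MLP size 8192, 1024-wide grouped-query KV projections) come from Llama 3.2 3B's published config, and rank 32 with attention + MLP targets is an assumed LoRA setup, not necessarily the one we used:

```python
# Back-of-the-envelope: how many parameters QLoRA actually trains.
# Architecture sizes are from Llama 3.2 3B's published config; the
# LoRA rank and target modules below are illustrative assumptions.
HIDDEN, KV, MLP, LAYERS = 3072, 1024, 8192, 28
TOTAL_PARAMS = 3.2e9          # "3B" base model (approximate)
RANK = 32                     # assumed LoRA rank

def lora_params(d_in, d_out, r=RANK):
    """A LoRA adapter adds two low-rank matrices: (d_in x r) and (r x d_out)."""
    return r * (d_in + d_out)

per_layer = (
    lora_params(HIDDEN, HIDDEN)    # q_proj
    + lora_params(HIDDEN, KV)      # k_proj
    + lora_params(HIDDEN, KV)      # v_proj
    + lora_params(HIDDEN, HIDDEN)  # o_proj
    + lora_params(HIDDEN, MLP)     # gate_proj
    + lora_params(HIDDEN, MLP)     # up_proj
    + lora_params(MLP, HIDDEN)     # down_proj
)
trainable = per_layer * LAYERS
print(f"trainable adapter params: {trainable/1e6:.1f}M "
      f"({100 * trainable / TOTAL_PARAMS:.1f}% of the base model)")
```

With these assumptions the adapters come to roughly 49M parameters, about 1.5% of the base model, which is why training fits comfortably on a free Colab GPU.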

Training data: 20 JSONL examples (system/user/assistant) teaching the model our exact structured JSON output format for:

  • Welcome messages
  • Modules
  • Quizzes
  • Reflection prompts
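Each training example follows the standard chat-style JSONL convention: one JSON object per line, holding system/user/assistant messages, where the assistant turn is the exact structured JSON we want the model to emit. The field names and content below are an illustrative sketch, not a record from the real dataset:

```python
import json

# One illustrative JSONL training record (content is made up, not taken
# from the actual dataset). The assistant turn contains the structured
# JSON output format the model is being taught to reproduce.
record = {
    "messages": [
        {"role": "system",
         "content": "You are ClarityPath. Respond only with valid JSON."},
        {"role": "user",
         "content": "Second-year CS student, unsure between ML and security."},
        {"role": "assistant",
         "content": json.dumps({
             "welcome": "Let's turn that uncertainty into a plan.",
             "modules": [{"title": "Mapping Your Options",
                          "reflection": "Which tradeoff worries you most?"}],
             "quiz": [],
         })},
    ]
}

line = json.dumps(record)          # one line of the .jsonl file
assert json.loads(line) == record  # round-trips cleanly
```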

Training time: ~10 minutes on a free T4 GPU

Export: After training, adapters are merged into the base model, then converted to GGUF with Q4_K_M quantization:

  • Model size: ~1.9GB
  • Retains roughly 95% of the full-precision model's quality
  • Optimized for CPU inference
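The ~1.9GB file size lines up with Q4_K_M's effective bit-width. A rough sanity check (the ~4.85 bits/weight average is an approximate figure for Q4_K_M, and the real file also keeps some tensors, such as embeddings, at higher precision):

```python
# Rough sanity check of the GGUF file size. ~4.85 bits/weight is an
# approximate effective rate for Q4_K_M (4-bit blocks plus per-block
# scales); the actual file keeps some tensors at higher precision.
params = 3.2e9                  # Llama 3.2 3B (approximate)
bits_per_weight = 4.85
size_gb = params * bits_per_weight / 8 / 1e9
print(f"~{size_gb:.1f} GB")
```

The estimate lands close to the observed ~1.9GB, which is small enough to load and run on an ordinary laptop CPU.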

Inference: llama-cpp-python + Flask

  • Local CPU inference (no external API)
  • Offline-ready
  • Better privacy for student data

Model 2: Visual Generation Model (Module Images)

The first model generates structured module content and then produces image prompts for our second AI model. The second model generates visuals used directly inside the modules.

This creates: Personalized Cognitive Content + Personalized Visual Layer
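The hand-off between the two models can be sketched as a single transformation step: structured module fields in, a clean image prompt out. The function name, module fields, and prompt template below are hypothetical, not the actual ClarityPath implementation:

```python
# Hypothetical sketch of the text-engine -> image-engine hand-off.
# build_image_prompt, the module fields, and the prompt wording are
# illustrative; the real pipeline's names and template may differ.
def build_image_prompt(module: dict) -> str:
    """Turn structured module content into an image-generation prompt."""
    return (f"Educational illustration for a module titled "
            f"'{module['title']}': {module['summary']}. "
            f"Flat, friendly style, no embedded text.")

module = {
    "title": "Opportunity Cost 101",
    "summary": "weighing time spent now against options kept open later",
}
prompt = build_image_prompt(module)
```

Keeping the hand-off a pure function of the structured module JSON is what makes the visual layer personalized for free: any change in the generated module automatically changes its image prompt.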


🌟 Why This Is Different

We didn’t build a generic chatbot.

We built:

  • A fine-tuned open-source LLM that outputs structured JSON
  • A CPU-deployable offline pipeline (GGUF + llama.cpp)
  • A two-model orchestration system (text engine → image engine)
  • A product focused on reflection + problem-solving, not “answer generation”

📈 Impact

ClarityPath supports:

  • Improved decision-making & media literacy
  • Enhanced future-ready knowledge
  • Better educational value (skills, not just credentials)
  • AI used as a skill-building partner instead of a shortcut

Conceptually: AI as Augmentation


🛠 Tech Stack

  • Frontend: React 18, TypeScript, Vite, Tailwind CSS, Framer Motion, React Router, localStorage
  • Backend: Flask
  • LLM: Llama 3.2 3B (QLoRA fine-tuned)
  • Inference: llama-cpp-python (llama.cpp)
  • Model format: GGUF (Q4_K_M quantization)
  • Training: Google Colab (T4 GPU)

🧩 Challenges We Ran Into

  • Getting consistent structured JSON outputs with a small training set
  • Optimizing for CPU inference (quantization tradeoffs)
  • Prompt routing: ensuring module context becomes clean image prompts
  • Balancing “personalization” without overfitting to limited examples
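A common mitigation for the structured-output problem is validate-and-retry: parse the model's reply, check it against the expected schema, and re-generate on failure. This is a hedged sketch of that pattern (the `REQUIRED_KEYS` schema is an assumption for illustration, not our exact schema):

```python
import json

REQUIRED_KEYS = {"welcome", "modules", "quiz"}  # assumed output schema

def parse_model_output(raw: str):
    """Return the parsed module JSON, or None if it fails validation
    (the caller can then retry generation with a stricter prompt)."""
    # Small models sometimes wrap JSON in markdown fences; strip them.
    raw = raw.strip().removeprefix("```json").removesuffix("```").strip()
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return None
    if not isinstance(data, dict) or not REQUIRED_KEYS <= data.keys():
        return None
    return data

good = '{"welcome": "hi", "modules": [], "quiz": []}'
bad = '{"welcome": "hi"}'
assert parse_model_output(good) is not None
assert parse_model_output(bad) is None
```

Rejecting malformed replies at this boundary keeps the UI integration reliable even when the small fine-tuned model occasionally drifts from the format.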

📚 What We Learned

  • QLoRA makes fine-tuning accessible even without expensive GPUs
  • Quantization + GGUF enables real offline AI products
  • Structured outputs (JSON) are critical for reliable UI integration
  • The best educational AI isn’t “answering”—it’s scaffolding thinking

🚀 What’s Next

  • Expand training dataset (more diverse majors, goals, and learning styles)
  • Add risk-aware career pathway exploration tools
  • Introduce cross-faculty / interdisciplinary modules
  • Pilot with small student cohorts to validate learning impact

👥 Team

SBH Developers

  • Bilal A
  • Haris N
  • Shayan S
  • Taha A
