CodeMentor AI — CrateHacks Submission

Inspiration

It started with a message in a Discord server at 2:17 AM.

“I’ve been staring at this graph problem for four hours. I’m starting to think I’m just not built for this.”

That message isn’t rare. It’s structural.

Millions practise on LeetCode, Codeforces, HackerRank, and CodeChef. The problems exist. The ambition exists. What’s missing is real-time mentorship at the exact moment frustration becomes self-doubt.

CodeMentor is a working, cross-platform Manifest v3 extension that constrains large language models into Socratic mentors instead of solution generators.

It doesn’t replace thinking.
It reinforces it.


What It Does

CodeMentor AI embeds directly into competitive programming platforms and acts as a structured, non-spoiler AI mentor.

Core Capabilities

Socratic AI Guidance (No Solution Leakage)
The model is engineered never to output working code. It validates reasoning, asks guiding questions, and surfaces structural hints only.

Smart Problem Analysis
Extracts live problem metadata from the DOM and generates intuition-first breakdowns. Multiple approaches are presented (brute-force → optimal) to build algorithmic trade-off awareness.

Progressive Hint Enforcement
Three escalating hints per problem, each unlocked only by engaging with the one before it. The full-solution layer is intentionally inaccessible.

Time-Aware Intervention
Tracks time-on-problem and introduces guidance when plateau behaviour is detected.
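The plateau check can be sketched as a small pure function (the thresholds below are illustrative, not the shipped values):

```javascript
// Sketch of the plateau check. The sidebar timer calls this
// periodically; when it returns true, the mentor proactively
// offers a guiding question instead of waiting to be asked.
function shouldIntervene(elapsedMs, msSinceLastEdit, hintsUsed) {
  const STUCK_AFTER_MS = 15 * 60 * 1000; // 15 min on the problem (illustrative)
  const IDLE_AFTER_MS = 5 * 60 * 1000;   // 5 min without editing (illustrative)
  // Only step in when the user looks stalled and still has hints left.
  return elapsedMs >= STUCK_AFTER_MS &&
         msSinceLastEdit >= IDLE_AFTER_MS &&
         hintsUsed < 3;
}
```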

Privacy-First Architecture

  • User-supplied Gemini key
  • Stored locally in chrome.storage.local
  • Zero backend infrastructure
  • No tracking, no data collection

All inference calls go directly from user → AI provider.
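A minimal sketch of that direct call, assuming the Generative Language REST API shape (the endpoint and model name here are illustrative, not necessarily the shipped ones):

```javascript
// The user's key comes straight from chrome.storage.local and the
// request goes straight to the provider: no proxy, no middleware.
function buildGeminiRequest(apiKey, promptText) {
  return {
    url: "https://generativelanguage.googleapis.com/v1beta/models/" +
         `gemini-1.5-flash:generateContent?key=${encodeURIComponent(apiKey)}`,
    options: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ contents: [{ parts: [{ text: promptText }] }] }),
    },
  };
}

async function askMentor(apiKey, promptText) {
  const { url, options } = buildGeminiRequest(apiKey, promptText);
  const res = await fetch(url, options); // direct user → provider call
  const data = await res.json();
  return data.candidates?.[0]?.content?.parts?.[0]?.text ?? "";
}
```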


Technical Architecture

Stack: Vanilla JavaScript, HTML5, CSS3
Platform: Chrome Extension (Manifest v3)

1. Content Scripts

  • Platform detection
  • DOM extraction
  • Problem parsing (title, difficulty, constraints)
  • Sidebar injection

Supported platforms:

  • LeetCode
  • Codeforces
  • HackerRank
  • CodeChef
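Platform detection reduces to a hostname check before an adapter is chosen — a simplified sketch:

```javascript
// Hostname-based platform detection, run by the content script
// before it selects an adapter and injects the sidebar.
function detectPlatform(url) {
  const host = new URL(url).hostname;
  if (host.endsWith("leetcode.com")) return "leetcode";
  if (host.endsWith("codeforces.com")) return "codeforces";
  if (host.endsWith("hackerrank.com")) return "hackerrank";
  if (host.endsWith("codechef.com")) return "codechef";
  return null; // unsupported page: no sidebar is injected
}
```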

2. Platform Adapter Layer

Each platform implements a shared interface to normalise:

  • Problem structure
  • Metadata shape
  • Extraction logic

This isolates DOM fragility and reduces breakage risk across UI updates.
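The adapter pattern can be sketched as follows (the selectors are hypothetical; each real adapter owns its platform's fragile DOM queries, and everything downstream sees one normalised shape):

```javascript
// Shared adapter interface sketch. Selectors are illustrative only.
const adapters = {
  leetcode: {
    selectors: { title: "[data-cy='question-title']", difficulty: ".difficulty" },
  },
  codeforces: {
    selectors: { title: ".problem-statement .title", difficulty: ".tag-box" },
  },
};

// `readText` abstracts document.querySelector so the normalisation
// logic stays testable outside a browser.
function extractProblem(platform, readText) {
  const { selectors } = adapters[platform];
  return {
    platform,
    title: (readText(selectors.title) || "").trim(),
    difficulty: (readText(selectors.difficulty) || "unknown").trim().toLowerCase(),
  };
}
```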

3. Sidebar UI

  • Chat state management
  • Hint progression logic
  • Timer
  • Per-problem progress persistence
  • Session statistics tracking
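The hint progression logic is essentially a three-level gate — a sketch of the state machine (details simplified):

```javascript
// Three-level hint gate. A new hint unlocks only after the learner
// has replied since the previous one; a fourth level (full solutions)
// simply does not exist in the state machine.
class HintTracker {
  constructor() {
    this.level = 0;           // hints revealed so far (0..3)
    this.engagedSince = true; // has the user replied since the last hint?
  }
  recordUserMessage() { this.engagedSince = true; }
  requestHint() {
    if (this.level >= 3 || !this.engagedSince) return null; // gated
    this.level += 1;
    this.engagedSince = false; // require engagement before the next one
    return this.level;
  }
}
```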

4. Background Service Worker (MV3)

  • Message routing
  • AI provider abstraction (Gemini)
  • Strict Socratic prompt enforcement
  • Lifecycle-safe state handling
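Message routing can be kept pure and wired into the worker separately — a sketch with illustrative handler names:

```javascript
// Pure message router: easy to test, no Chrome APIs required.
function routeMessage(handlers, message) {
  const handler = handlers[message.type];
  if (!handler) return { error: `unknown type: ${message.type}` };
  return handler(message.payload);
}

const handlers = {
  ANALYZE_PROBLEM: (p) => ({ summary: `analysing ${p.title}` }),
  REQUEST_HINT: (p) => ({ hintLevel: p.level }),
};

// In the extension itself (not runnable outside Chrome):
// chrome.runtime.onMessage.addListener((msg, sender, sendResponse) => {
//   sendResponse(routeMessage(handlers, msg));
// });
```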

5. Local Storage Layer

  • API key storage
  • Persistent progress per problem
  • Global learning metrics

No proxy server.
No hidden middleware.
Fully client-side.
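The storage layer reduces to a key scheme plus a merge rule — a sketch with an illustrative progress shape (in the extension, chrome.storage.local persists the merged record):

```javascript
// One record per problem, keyed by platform + problem id.
function progressKey(platform, problemId) {
  return `progress:${platform}:${problemId}`;
}

// Merge an update into an existing record (or a fresh default).
function mergeProgress(existing, update) {
  const base = existing || { hintsUsed: 0, timeSpentMs: 0, solved: false };
  return {
    hintsUsed: Math.max(base.hintsUsed, update.hintsUsed ?? base.hintsUsed),
    timeSpentMs: base.timeSpentMs + (update.timeSpentDeltaMs ?? 0),
    solved: base.solved || !!update.solved,
  };
}
```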


Key Technical Challenges

Teaching the AI Not to Solve

LLMs default to producing complete solutions.

We engineered a strict prompt-control system:

  • No working code generation
  • Structural hints only
  • Guided questioning
  • Pattern-based leakage prevention

More than 40 prompt iterations were required to stabilise non-solution behaviour.

The hardest part was constraining intelligence, not invoking it.
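Pattern-based leakage prevention also acts as a last line of defence: even if the prompt fails, code in a response is redacted before it reaches the learner. A simplified sketch (the real pattern set is larger):

```javascript
// Simplified subset of the leakage patterns.
const CODE_PATTERNS = [
  /```[\s\S]*?```/g,                                        // fenced code blocks
  /^\s*(def |class |function |public |#include)[^\n]*$/gm,  // code-like lines
];

// Scan a model reply; redact anything that looks like working code.
function redactSolutions(reply) {
  let leaked = false;
  let clean = reply;
  for (const pattern of CODE_PATTERNS) {
    if (pattern.test(clean)) {
      leaked = true;
      pattern.lastIndex = 0; // reset /g regex state before replacing
      clean = clean.replace(pattern, "[redacted: try reasoning it out first]");
    }
  }
  return { clean, leaked };
}
```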


Cross-Platform DOM Extraction

Each platform:

  • Uses different markup structures
  • Updates without notice
  • Exposes inconsistent metadata

We built modular adapters to:

  • Isolate platform fragility
  • Normalise data shape
  • Reduce maintenance surface area

This prevents one platform update from breaking the entire extension.


Manifest v3 Constraints

MV3 eliminates persistent background pages.

We redesigned our messaging architecture to:

  • Respect service worker lifecycle limits
  • Maintain state safely across reloads
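The rehydration pattern can be sketched like this (the store is injected so the pattern is testable outside a browser; in the extension it would be backed by chrome.storage):

```javascript
// MV3 can kill the service worker at any time, so state is
// write-through to storage and lazily reloaded on wake-up.
class DurableState {
  constructor(store) {
    this.store = store; // { get(key), set(key, value) }
    this.cache = null;
  }
  load() {
    if (this.cache === null) {
      this.cache = this.store.get("state") ?? {}; // rehydrate after restart
    }
    return this.cache;
  }
  update(patch) {
    this.cache = { ...this.load(), ...patch };
    this.store.set("state", this.cache); // write-through on every change
    return this.cache;
  }
}
```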

Why This Is Different

Most AI coding tools:

  • Generate complete solutions
  • Optimise for speed
  • Replace thinking

CodeMentor:

  • Preserves productive struggle
  • Structures reasoning
  • Encourages articulation of approach
  • Prevents answer-copying by design

Technically, this is not a thin wrapper:

  • 4 platform adapters
  • AI abstraction (Gemini)
  • MV3 lifecycle-compliant architecture
  • Local-only key storage
  • Prompt-control system designed to prevent solution leakage
  • Persistent per-problem learning metrics

Accomplishments

  • Fully working Manifest v3 extension
  • Multi-platform support
  • AI provider integration
  • Zero-backend architecture
  • Privacy-first design
  • Progressive hint enforcement
  • Under 2-minute install time
  • 40+ prompt refinements

Runs entirely client-side.
Installs in under two minutes.
Works today.


What’s Next

  • Adaptive difficulty calibration
  • Post-solve solution debrief
  • Firefox & Edge support
  • Optional offline inference via WebLLM

CodeMentor turns isolation into guided momentum — without compromising learning integrity.

Built With

  • Vanilla JavaScript, HTML5, CSS3
  • Chrome Extensions (Manifest v3)
  • Gemini API