EaswarGn/Question-Craft

🧠 Question Craft

Question Craft is an AI-powered application that automatically generates high-quality multiple-choice questions. Given a sample question, it uses Llama 2 to create a new question that mirrors the topic, style, and difficulty of the original, making it well suited to practice, assessment design, and educational tooling.


✨ Key Features

  • 🎯 Style- and difficulty-aware question generation
  • 🔍 Real-time factual verification using web search
  • ⚡ Instant feedback on answer selection
  • 🔐 Secure authentication and session management
  • 🌐 Fully decoupled frontend–backend architecture

🔄 How It Works

Question Craft combines a lightweight web frontend with a modular, AI-driven backend that handles generation, verification, and answer validation.


🖥️ Frontend

The frontend is built with standard web technologies (HTML, CSS, JavaScript) and focuses on clarity, speed, and ease of use.

  1. Input
    On sampleInput.html, users submit:

    • Topic
    • Question prompt
    • Four answer choices
    • Correct answer
  2. Serialization
    The input is packaged into a structured JSON object and stored temporarily in browser local storage.

  3. Backend Routing
    After redirecting to questionPage.html, the frontend:

    • Calls a Vercel API endpoint (/api/getLinks)
    • Retrieves the active backend URL from Upstash Redis
    • Sends the question JSON to the backend for processing
  4. Interaction & Feedback
    A loading screen is displayed during generation. Once complete, users can answer the new question and receive immediate correctness feedback.
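The payload assembled in the serialization step can be sketched in Python (the key names here are illustrative; the actual keys the frontend uses are not documented in this README):

```python
import json

# Hypothetical shape of the question payload; the real frontend's
# key names may differ from these illustrative ones.
def build_question_payload(topic, prompt, choices, correct):
    """Package a sample question for the backend, mirroring the
    serialization step performed in the browser."""
    if len(choices) != 4:
        raise ValueError("exactly four answer choices are expected")
    if correct not in choices:
        raise ValueError("the correct answer must be one of the choices")
    payload = {
        "topic": topic,
        "question": prompt,
        "choices": choices,
        "answer": correct,
    }
    # The browser stores the JSON string (e.g. in local storage)
    # before redirecting to questionPage.html.
    return json.dumps(payload)

example = build_question_payload(
    "Geography",
    "What is the capital of France?",
    ["Paris", "Lyon", "Nice", "Lille"],
    "Paris",
)
```

In the actual app this serialization happens in browser JavaScript; the sketch only illustrates the structure of the object handed to the backend.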


⚙️ Backend

The backend runs in a Python environment and is orchestrated via a Jupyter Notebook
(langchain-experimentation-2-main.ipynb) that launches a Flask server.

🏗️ Infrastructure

  • 🌍 ngrok exposes the local Flask server to the public internet
  • 🧭 On startup, the server registers its ngrok URL in Upstash (Redis) for service discovery
  • 🔐 Authentication is handled via Vercel serverless functions, using:
    • bcryptjs for password hashing
    • Cookies for session management
    • Redis for persistent user data
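The service-discovery handshake above can be sketched as follows. A plain dict stands in for Upstash Redis so the example runs without credentials, and `BACKEND_URL_KEY` is an illustrative key name; in the real backend the store would be a Redis client (e.g. `store.set(key, url)`) and the public URL would come from ngrok at startup.

```python
# Sketch of ngrok-URL service discovery. A dict stands in for
# Upstash Redis here; with a real Redis client, the assignment
# below would be store.set(BACKEND_URL_KEY, public_url).
BACKEND_URL_KEY = "backend:url"  # illustrative key name

def register_backend_url(store, public_url):
    """Record the tunnel's public URL so the frontend can discover it.
    The Flask server does this once on startup."""
    store[BACKEND_URL_KEY] = public_url
    return public_url

def lookup_backend_url(store):
    """In essence, what the /api/getLinks serverless function does:
    read the currently registered backend URL."""
    return store.get(BACKEND_URL_KEY)

store = {}
register_backend_url(store, "https://example-tunnel.ngrok.io")
```

This indirection is what lets the frontend keep working even though the ngrok URL changes every time the notebook restarts the server.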

🤖 AI Question Generation Pipeline (LangChain)

The backend uses a multi-step LangChain pipeline to ensure both creativity and correctness:

Step 1: Question Generation

  • A few-shot prompt is constructed using the user’s sample question
  • The meta-llama/Llama-2-13b-chat-hf model generates:
    • A new question
    • Four answer choices
    • A proposed correct answer
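In plain Python, the few-shot prompt assembled in this step amounts to something like the sketch below. The instruction wording is illustrative; the project builds the equivalent prompt with LangChain's prompt templates rather than manual string formatting.

```python
def build_generation_prompt(sample):
    """Construct a few-shot prompt asking the model for a new question
    with the same topic, style, and difficulty as the user's sample.
    `sample` uses illustrative keys (topic/question/choices/answer)."""
    lettered = "\n".join(
        f"{letter}. {choice}"
        for letter, choice in zip("ABCD", sample["choices"])
    )
    return (
        "You write multiple-choice questions.\n"
        "Here is an example question:\n\n"
        f"Topic: {sample['topic']}\n"
        f"Question: {sample['question']}\n"
        f"{lettered}\n"
        f"Correct answer: {sample['answer']}\n\n"
        "Now write ONE new question on the same topic, in the same "
        "style and at the same difficulty. Give four choices labelled "
        "A-D and state the correct answer."
    )

prompt = build_generation_prompt({
    "topic": "Geography",
    "question": "What is the capital of France?",
    "choices": ["Paris", "Lyon", "Nice", "Lille"],
    "answer": "Paris",
})
```

The resulting string is what gets sent to meta-llama/Llama-2-13b-chat-hf; the sample question acts as the single in-context example.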

Step 2: Factual Verification

  • The generated question is sent to the Tavily Search API
  • Relevant real-world context and supporting facts are retrieved from the web

Step 3: Answer Finalization

  • A second prompt is built using:
    • The generated question
    • Its answer options
    • Tavily’s retrieved context
  • Llama 2 re-evaluates the question and confirms the correct answer based on verified information
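A minimal sketch of the re-evaluation prompt, assuming the Tavily results have already been reduced to a list of text snippets (the snippet format and instruction wording are illustrative, not the project's exact template):

```python
def build_verification_prompt(question, choices, snippets):
    """Ask the model to confirm the correct answer using retrieved
    web context, mirroring the final step of the LangChain pipeline."""
    lettered = "\n".join(
        f"{letter}. {choice}" for letter, choice in zip("ABCD", choices)
    )
    context = "\n".join(f"- {s}" for s in snippets)
    return (
        "Using ONLY the context below, decide which choice correctly "
        "answers the question.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\n{lettered}\n\n"
        "Reply with the letter of the correct choice."
    )

check = build_verification_prompt(
    "Which planet is known as the Red Planet?",
    ["Venus", "Mars", "Jupiter", "Saturn"],
    ["Mars is called the Red Planet due to iron oxide on its surface."],
)
```

Grounding this second pass in retrieved context is what keeps the generated answer key from relying on the model's unverified recall.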

✔️ The finalized, verified question is returned to the frontend as a JSON object.


🧰 Technology Stack

🎨 Frontend

  • HTML
  • JavaScript
  • SCSS / CSS

🧪 Backend

  • Python
  • Flask
  • LangChain

🧠 Language Model

  • Llama 2 (via Hugging Face transformers)

☁️ Cloud & Services

  • Vercel — Serverless APIs (authentication & service discovery)
  • Upstash — Managed Redis (URLs, sessions, user data)
  • ngrok — Secure tunneling for local backend exposure
  • Tavily — Real-time search API for factual verification

🔐 Authentication

  • Node.js
  • TypeScript
  • bcryptjs
  • ioredis
