Question Craft is an AI-powered application that automatically generates high-quality multiple-choice questions. Given a sample question, the system uses Llama 2 to create a new question that mirrors the topic, style, and difficulty of the original—making it ideal for practice, assessment design, and educational tooling.
- 🎯 Style- and difficulty-aware question generation
- 🔍 Real-time factual verification using web search
- ⚡ Instant feedback on answer selection
- 🔐 Secure authentication and session management
- 🌐 Fully decoupled frontend–backend architecture
Question Craft combines a lightweight web frontend with a modular, AI-driven backend that handles generation, verification, and answer validation.
The frontend is built with standard web technologies (HTML, CSS, JavaScript) and focuses on clarity, speed, and ease of use.
- **Input**
  On `sampleInput.html`, users submit:
  - Topic
  - Question prompt
  - Four answer choices
  - Correct answer
- **Serialization**
  The input is packaged into a structured JSON object and stored temporarily in browser local storage.
- **Backend Routing**
  After redirecting to `questionPage.html`, the frontend:
  - Calls a Vercel API endpoint (`/api/getLinks`)
  - Retrieves the active backend URL from Upstash Redis
  - Sends the question JSON to the backend for processing
- **Interaction & Feedback**
  A loading screen is displayed during generation. Once complete, users can answer the new question and receive immediate correctness feedback.
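The JSON object assembled from these form fields might look like the sketch below. The exact schema is not specified in the source, so the field names here are illustrative:

```python
import json

def build_question_payload(topic, prompt, choices, correct_answer):
    """Package the sample-question form fields into the JSON object the
    frontend stores in local storage and later sends to the backend.
    Field names are illustrative; the real schema may differ."""
    if len(choices) != 4:
        raise ValueError("exactly four answer choices are expected")
    if correct_answer not in choices:
        raise ValueError("the correct answer must be one of the choices")
    return json.dumps({
        "topic": topic,
        "question": prompt,
        "choices": choices,
        "answer": correct_answer,
    })

payload = build_question_payload(
    "Astronomy",
    "Which planet has the shortest day?",
    ["Mercury", "Venus", "Jupiter", "Mars"],
    "Jupiter",
)
```

The same validation (four choices, answer among them) would naturally live in the frontend's submit handler before the payload reaches local storage.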
The backend runs in a Python environment and is orchestrated from a Jupyter notebook (`langchain-experimentation-2-main.ipynb`) that launches a Flask server.
- 🌍 ngrok exposes the local Flask server to the public internet
- 🧭 On startup, the server registers its ngrok URL in Upstash (Redis) for service discovery
- 🔐 Authentication is handled via Vercel serverless functions, using:
  - `bcryptjs` for password hashing
  - Cookies for session management
  - Redis for persistent user data
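The startup sequence described above can be sketched as follows: open an ngrok tunnel, then write the public URL into Upstash Redis so the Vercel `/api/getLinks` function can discover it. The Redis key name, port, and environment variable are assumptions, not taken from the source:

```python
import os

# Hypothetical key under which /api/getLinks looks up the backend URL.
BACKEND_URL_KEY = "backend:url"

def registration_entry(public_url, key=BACKEND_URL_KEY):
    """Return the (key, value) pair to store in Redis for service discovery."""
    return key, public_url.rstrip("/")

if __name__ == "__main__":
    # Requires: pip install flask pyngrok redis
    from flask import Flask
    from pyngrok import ngrok
    import redis

    app = Flask(__name__)
    tunnel = ngrok.connect(5000)          # expose the local Flask port
    key, url = registration_entry(tunnel.public_url)

    # Upstash exposes a standard Redis endpoint (credentials assumed in env).
    r = redis.from_url(os.environ["UPSTASH_REDIS_URL"])
    r.set(key, url)                       # register for /api/getLinks

    app.run(port=5000)
```

Because the ngrok URL changes on every restart, re-registering at startup keeps the frontend's service-discovery lookup current without redeploying anything.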
The backend uses a multi-step LangChain pipeline to ensure both creativity and correctness:
1. A few-shot prompt is constructed using the user's sample question
2. The `meta-llama/Llama-2-13b-chat-hf` model generates:
   - A new question
   - Four answer choices
   - A proposed correct answer
3. The generated question is sent to the Tavily Search API
4. Relevant real-world context and supporting facts are retrieved from the web
5. A second prompt is built using:
   - The generated question
   - Its answer options
   - Tavily's retrieved context
6. Llama 2 re-evaluates the question and confirms the correct answer based on verified information
✔️ The finalized, verified question is returned to the frontend as a JSON object.
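Stripped of the LangChain plumbing, the generate-then-verify flow reduces to two prompt-construction steps around a search call. The prompt wording and helper names below are illustrative sketches, not the project's actual templates:

```python
def generation_prompt(sample):
    """Few-shot prompt: show the user's sample question, then ask for a new
    one on the same topic at the same difficulty (wording is illustrative)."""
    return (
        "Here is an example multiple-choice question:\n"
        f"Topic: {sample['topic']}\n"
        f"Q: {sample['question']}\n"
        f"Choices: {', '.join(sample['choices'])}\n"
        f"Answer: {sample['answer']}\n\n"
        "Write one new question on the same topic, in the same style and at "
        "the same difficulty, with four choices and the correct answer."
    )

def verification_prompt(generated, context):
    """Second prompt: ask the model to confirm the proposed answer against
    web context retrieved via Tavily."""
    return (
        f"Q: {generated['question']}\n"
        f"Choices: {', '.join(generated['choices'])}\n"
        f"Proposed answer: {generated['answer']}\n\n"
        f"Verified context:\n{context}\n\n"
        "Based only on the context above, state the correct answer."
    )

# In the real pipeline these strings are rendered by LangChain templates and
# fed to meta-llama/Llama-2-13b-chat-hf, and `context` comes from the Tavily
# Search API, roughly:
#   from tavily import TavilyClient
#   context = TavilyClient(api_key=...).search(generated["question"])
```

Separating generation from verification is the key design choice: the second prompt grounds the model in retrieved facts, catching cases where the first pass hallucinated a plausible but wrong answer.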
- HTML
- JavaScript
- SCSS / CSS
- Python
- Flask
- LangChain
- Llama 2 (via Hugging Face `transformers`)
- Vercel — Serverless APIs (authentication & service discovery)
- Upstash — Managed Redis (URLs, sessions, user data)
- ngrok — Secure tunneling for local backend exposure
- Tavily — Real-time search API for factual verification
- Node.js
- TypeScript
- bcryptjs
- ioredis