Inspiration
We noticed a common problem in classrooms: students solve many problems but receive almost no personalized feedback. Teachers simply cannot write individual comments for every student on every question due to large class sizes and limited time. As a result, students only see right or wrong marks without understanding why they made a mistake or how to improve. Meanwhile, AI tools like ChatGPT are often used by students to get answers directly, which reduces critical thinking instead of supporting real learning. We wanted to build something different – not an answer generator, but a feedback assistant that helps students learn from their mistakes and helps teachers save time.
What it does
LDF (Learning Diagnostic Feedback) is a lightweight AI system that takes one question and one student answer, then generates a complete diagnostic feedback report. The report tells the student whether their answer is correct or incorrect, identifies the key concept being tested, explains what the student did well and where they struggled, provides a step‑by‑step explanation tailored to the student’s actual answer, and gives one or two specific study recommendations. Teachers can review, edit, or override any AI‑generated feedback before sharing it with students. The system is designed for formative learning – helping students understand why they got something wrong and how to improve – not for high‑stakes grading.
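The report fields described above could be modeled as a simple data structure. This is only an illustrative sketch; the field names are our own shorthand, not the system's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class FeedbackReport:
    # Hypothetical schema mirroring the report fields described above.
    correct: bool                   # was the answer right or wrong
    key_concept: str                # concept the question tests
    strengths: str                  # what the student did well
    struggles: str                  # where the student struggled
    explanation: str                # step-by-step walkthrough tailored to the answer
    recommendations: list[str] = field(default_factory=list)  # 1-2 study tips
    teacher_approved: bool = False  # teacher reviews before it reaches the student

# Example report for an arithmetic question
report = FeedbackReport(
    correct=False,
    key_concept="order of operations",
    strengths="Set up the expression correctly",
    struggles="Applied addition before multiplication",
    explanation="Step 1: evaluate the multiplication first ...",
    recommendations=["Review order of operations with two practice problems"],
)
```

Keeping `teacher_approved` false by default mirrors the teacher-in-the-loop design: nothing is shared until a teacher signs off.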
How we built it
We built LDF using existing large language model APIs, specifically Alibaba Cloud Bailian with the Qwen model. The system does not require any training data, machine learning expertise, or complex infrastructure. The core logic is simple: the system presents a question to the student, the student types their answer into a basic interface, and the system calls the API to generate a structured feedback report. We used Python for the backend and created a command‑line version for quick prototyping. The entire system runs on a standard computer with internet access, and no heavy cloud deployment is needed.
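The core loop above can be sketched with the `openai` Python client pointed at Bailian's OpenAI-compatible endpoint. The endpoint URL, model name, and prompt wording below are assumptions for illustration, not our exact configuration:

```python
import os

SYSTEM_PROMPT = (
    "You are a diagnostic feedback assistant. Given a question and a student's "
    "answer, report: correctness, the key concept tested, strengths, struggles, "
    "a step-by-step explanation tailored to the answer, and 1-2 study tips. "
    "Do not simply hand over the correct answer."
)

def build_messages(question: str, student_answer: str) -> list[dict]:
    """Assemble the chat messages for one feedback request."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {
            "role": "user",
            "content": f"Question: {question}\nStudent answer: {student_answer}",
        },
    ]

def generate_feedback(question: str, student_answer: str) -> str:
    """Call the LLM once and return the feedback report as text."""
    # Imported here so the prompt-building logic works without the package.
    from openai import OpenAI

    # Bailian/DashScope exposes an OpenAI-compatible endpoint; the URL and
    # model name are illustrative defaults, not our exact settings.
    client = OpenAI(
        api_key=os.environ["DASHSCOPE_API_KEY"],
        base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",
    )
    resp = client.chat.completions.create(
        model="qwen-plus",
        messages=build_messages(question, student_answer),
    )
    return resp.choices[0].message.content
```

The command-line prototype just loops: print a question, read the student's answer with `input()`, and print `generate_feedback(...)`.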
Challenges we ran into
One challenge was ensuring that the AI generates feedback that is actually useful and not too generic. We solved this by carefully designing the prompt with clear instructions and examples. Another challenge was handling different types of student answers – some are numbers, some are expressions, and some are open‑ended text. We implemented a hybrid judgment approach that first tries exact matching for numeric answers and then falls back to the LLM for more complex cases. We also faced the limitation that we cannot detect whether a student copied their answer from another AI, but we decided to focus on the system's core value – providing feedback after the student answers – rather than trying to solve a problem beyond our scope.
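The hybrid judgment described above can be sketched as follows. The function names and return shape are illustrative, and `ask_llm_to_judge` stands in for our actual LLM call:

```python
def try_numeric_match(student_answer: str, expected: str, tol: float = 1e-6):
    """Return True/False if both answers parse as numbers, else None."""
    try:
        return abs(float(student_answer) - float(expected)) <= tol
    except ValueError:
        return None

def ask_llm_to_judge(student_answer: str, expected: str) -> bool:
    # Placeholder for the LLM comparison of expressions and open-ended text.
    raise NotImplementedError

def judge(student_answer: str, expected: str) -> tuple[bool, str]:
    """Hybrid judgment: exact numeric check first, LLM fallback otherwise.

    Returns the verdict plus which path decided it ("numeric" or "llm").
    """
    verdict = try_numeric_match(student_answer, expected)
    if verdict is not None:
        return verdict, "numeric"
    return ask_llm_to_judge(student_answer, expected), "llm"
```

The cheap numeric path handles most math answers without an API call; only expressions and free text fall through to the model.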
Accomplishments that we're proud of
We built a working prototype that can generate a complete, personalized feedback report within seconds for any given question and student answer. The system is fully teacher‑controlled, meaning teachers can review and edit every piece of feedback before it reaches the student. We also kept the cost extremely low – approximately one to two cents per feedback report – making it affordable for any school. Most importantly, we created a practical, ethical, and transparent AI tool that does not replace teachers but supports them in providing better feedback to every student.
What we learned
We learned that building an AI education tool does not require massive datasets or complex models. Using existing LLM APIs with well‑designed prompts is enough to create something useful. We also learned the importance of being honest about what the system can and cannot do. We cannot detect cheating, we cannot guarantee perfect feedback every time, and we cannot replace teacher judgment – but we can provide a helpful assistant that saves time and offers personalized insights. Being transparent about these limitations actually makes the system more trustworthy.
What's next for LDF-Learning Diagnostic Feedback System
Next, we plan to build a simple web interface so that students and teachers can use the system more easily without typing commands. We also want to allow teachers to upload their own questions instead of relying on AI‑generated ones, giving them full control over the content. Another improvement is to add a simple dashboard where teachers can view all student reports in one place and edit them efficiently. Finally, we want to pilot the system in a real classroom with 10 to 20 students to collect feedback and measure whether it actually helps improve learning outcomes. If successful, we hope to make LDF available as a free or low‑cost tool for any teacher who wants to provide personalized feedback without spending hours writing it manually.
Built With
- gunicorn
- openai
- python
- qwen