Clause-Driven Automated Grading of SQL’s DDL and DML Statements

by Benard Wanjiru, Patrick van Bommel and Djoerd Hiemstra

Automated grading systems for SQL courses can significantly reduce instructor workload while ensuring consistency and objectivity in assessment. At our university, an automated SQL grading tool has become essential for evaluating assignments. Initially, we focused on grading Data Query Language (SELECT) statements, which constitute the core content of assignments in our first-year computer science course. SELECT statements produce a results table, which makes automatic grading relatively easy. However, other SQL statements, such as CREATE TABLE, INSERT, DELETE, and UPDATE, do not produce a results table, which makes them harder to grade. Recognizing the need to cover broader course material, we have extended our system to evaluate advanced Data Definition Language (DDL) and Data Manipulation Language (DML) statements. In this paper, we describe our approach to automated DDL/DML grading and illustrate our method of clause-driven tailored feedback generation. We explain how our system generates precise, targeted feedback based on specific SQL clauses or components. In addition, we present a practical example to highlight the benefits of our approach. Finally, we benchmark our grading tool against existing systems. Our extended tool can parse and provide feedback on most student SQL submissions. It consistently provides targeted feedback, generating nearly one suggestion per error; it produces shorter feedback for simpler DML queries and longer feedback for more complex syntax; it pinpoints precise SQL errors; and it generates actionable suggestions, with each message directly tied to the specific component that caused the error.
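The core idea of clause-driven feedback is to compare a submission to a reference solution component by component, so that each error yields its own targeted message rather than a single pass/fail verdict. The paper does not publish the tool's implementation; the following is a minimal, hypothetical sketch of that idea for CREATE TABLE statements, using a naive regex-based parse (it would not handle constraints spanning commas, e.g. DECIMAL(10,2)). All function names are our own illustration, not the tool's API.

```python
import re

def parse_columns(ddl):
    """Naively extract column-name -> definition pairs from a CREATE TABLE
    statement. A real grader would use a proper SQL parser."""
    body = re.search(r"\((.*)\)", ddl, re.S).group(1)
    columns = {}
    for part in body.split(","):
        tokens = part.strip().split()
        # Skip table-level constraint clauses; keep plain column definitions.
        if tokens and tokens[0].upper() not in (
                "PRIMARY", "FOREIGN", "UNIQUE", "CHECK", "CONSTRAINT"):
            columns[tokens[0].lower()] = " ".join(tokens[1:]).upper()
    return columns

def feedback(reference, submission):
    """Produce one targeted suggestion per mismatching component."""
    ref, sub = parse_columns(reference), parse_columns(submission)
    messages = []
    for name, definition in ref.items():
        if name not in sub:
            messages.append(f"Missing column '{name}'.")
        elif sub[name] != definition:
            messages.append(
                f"Column '{name}': expected '{definition}', found '{sub[name]}'.")
    for name in sub:
        if name not in ref:
            messages.append(f"Unexpected column '{name}'.")
    return messages

reference = "CREATE TABLE student (id INT PRIMARY KEY, name VARCHAR(50))"
submission = "CREATE TABLE student (id INT, surname VARCHAR(50))"
for message in feedback(reference, submission):
    print(message)
```

Because each message is generated from one specific component, the sketch naturally produces roughly one suggestion per error, which is the property the paper reports for its tool.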

To be presented at the SIGCSE Technical Symposium on Computer Science Education (SIGCSE 2026) on 18-21 February 2026 in St. Louis, United States of America.

Sensitivity of Automated SQL Grading in Computer Science Courses

by Benard Wanjiru, Patrick van Bommel, and Djoerd Hiemstra

Previous research has primarily relied on fixed procedures when implementing partial grading systems. As a result, the sensitivity of such systems in terms of error analysis becomes inflexible as well. In this paper, we employ a software correctness model that allows for a dynamic and flexible approach to adjusting the sensitivity of a grading system based on the user’s needs and goals. We show how partial grading can be used to award fair grades and also to categorize students into groups based on the strengths and weaknesses observed in their answers. Furthermore, we show how the sensitivity of a grading system can be varied to allow such grouping. To illustrate this, we analysed more than 2000 answers to 6 SQL programming assignments. An implication of this study is that instructors can carry out more effective partial grading of SQL queries as well as adjust learning material based on the needs of a particular group of students. They can address the observed limitations, thereby bridging the gap between high-performing students and those who require additional attention.
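The paper's correctness model is not reproduced in this announcement, but the idea of tunable sensitivity can be sketched as follows: weight each error category, and let a sensitivity parameter scale how harshly weighted errors reduce the partial grade. The categories, weights, and grading scale below are illustrative assumptions, not values from the paper.

```python
def partial_grade(errors, weights, sensitivity=1.0):
    """Hypothetical partial grade on a 0-10 scale. Each error is penalized
    by its category weight (default 1.0), scaled by `sensitivity`: a lower
    sensitivity is more lenient, a higher one stricter."""
    penalty = sensitivity * sum(weights.get(error, 1.0) for error in errors)
    return max(0.0, 10.0 - penalty)

# Assumed error categories and weights, for illustration only.
weights = {"syntax": 3.0, "wrong_column": 1.5, "missing_clause": 2.0}
errors = ["wrong_column", "missing_clause"]

print(partial_grade(errors, weights, sensitivity=1.0))  # 6.5
print(partial_grade(errors, weights, sensitivity=0.5))  # 8.25
```

Because the same error profile maps to different grades under different sensitivity settings, grades computed this way can also be thresholded to group students by the kinds of errors they make, as the paper proposes.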

To be presented at the third International Conference on Innovations in Computing Research (ICR) on August 12–14, 2024 in Athens, Greece.

[download pdf]

Towards a Generic Model for Classifying Software into Correctness Levels and its Application to SQL

by Benard Wanjiru, Patrick van Bommel, and Djoerd Hiemstra

Automated grading systems can save a lot of time when carrying out grading of software exercises. In this paper, we present our ongoing work on a generic model for generating software correctness levels. These correctness levels enable partial grades for students’ software exercises. The generic model can be used as a foundation for assessing the correctness of SQL queries and can be generalized to other programming languages.

To be presented at the SEENG 2023 Workshop on Software Engineering for the Next Generation of the 45th International Conference on Software Engineering on Tuesday 16 May 2023 in Melbourne, Australia.

[download pdf]