Automatic Programming Assessment System for Computer Science Bridge Course - An Experience Report
RWTH Aachen University in Germany offers a bridge course that introduces students from a variety of study programs to the basics of imperative programming. Due to the high number of students and the limited availability of tutors, it is hard to provide instant individual feedback to all students, to notice how difficult the tasks are for them, and to reliably monitor their progress during the course. This motivated us to use an Automatic Program Assessment System (APAS) to provide instant formative feedback to students and to systematically assess the course’s tasks.
In this paper, we present our study in which we investigated (1) whether the use of our APAS influences the students’ perceived difficulty of the programming tasks, (2) whether the use of our APAS increases the students’ progression speed, and (3) whether the number of automated assessments triggered by the students can serve as an indicator of a task’s perceived difficulty. The results did not allow us to identify any meaningful differences between the study and control groups with regard to perceived difficulty and progression speed. We found that the number of automated assessments can serve as a rough indicator of a task’s perceived difficulty.
We also found initial indications that the use of automated assessment helps ensure that students complete the tasks in full and as intended by the teachers, and that it might improve code quality. This needs to be investigated further in future work.