Automated Feedback Generation for Programming Assignments through Diversification
This program is tentative and subject to change.
Immediate and personalized feedback on students’ programming assignments is important for improving their programming skills. However, it is challenging for instructors to give personalized feedback to every student, since each program is written differently. To address this problem, Automated Feedback Generation (AFG) techniques have been proposed, which identify faults in an incorrect program, generate patches, and provide feedback once the patches pass validation. AFG relies on correct programs submitted by students to localize faults and generate patches, so having a diverse set of correct programs is important for its performance. However, in small-scale programming courses or for new online judge problems, such diversity may be lacking. In this paper, we propose Mentored, a new AFG technique for students’ programming assignments based on diversification. Mentored generates new program structures by combining existing programs in various ways, producing modifications tailored to each incorrect program while reducing the dependency on the available correct programs. Additionally, Mentored provides transparent feedback on the process of repairing the incorrect program. We evaluate Mentored on real student programming assignments and compare it with state-of-the-art AFG approaches. Our dataset includes real university introductory programming assignments and online judge problems. Experimental results show that Mentored achieves higher repair rates and produces more diverse program structures than the other AFG approaches. Moreover, by presenting a transparent sequence of repair steps, Mentored is expected to improve students’ programming skills and reduce instructors’ manual effort in feedback generation. These results indicate that Mentored can be a useful tool in programming education.
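To make the AFG workflow described in the abstract concrete, the following is a minimal, hypothetical Python sketch of a validation-driven repair-and-feedback loop: candidate repairs (e.g., fragments drawn from correct reference programs) are checked against instructor test cases, and each step is reported as feedback. All names, the patch representation, and the repair strategy here are illustrative assumptions, not Mentored's actual implementation.

```python
# Hypothetical sketch of a test-driven repair-and-feedback loop (illustrative
# only; this is NOT Mentored's actual algorithm or code).
from typing import Callable, Dict, List, Tuple

TestCase = Tuple[tuple, object]  # (input arguments, expected output)


def failing_tests(candidate: Callable, tests: List[TestCase]) -> List[TestCase]:
    """Return the test cases the candidate program does not pass."""
    failed = []
    for args, expected in tests:
        try:
            if candidate(*args) != expected:
                failed.append((args, expected))
        except Exception:                      # crashes also count as failures
            failed.append((args, expected))
    return failed


def generate_feedback(wrong: Callable,
                      candidates: Dict[str, Callable],
                      tests: List[TestCase]) -> List[str]:
    """Validate candidate repairs and build a step-by-step feedback report."""
    report: List[str] = []
    failed = failing_tests(wrong, tests)
    if not failed:
        return ["All tests pass; no repair needed."]
    report.append(f"{len(failed)} of {len(tests)} tests fail, "
                  f"e.g. for input {failed[0][0]}.")
    for name, patched in candidates.items():
        if not failing_tests(patched, tests):  # empty list -> patch validates
            report.append(f"Candidate repair '{name}' passes all tests.")
            return report
    report.append("No candidate repair validated; manual review is needed.")
    return report


if __name__ == "__main__":
    # Student submission with an off-by-one bug when summing 1..n.
    def student_sum(n: int) -> int:
        return sum(range(n))                   # misses n itself

    # Candidate repairs, e.g. fragments borrowed from correct reference programs.
    candidates = {
        "use inclusive range": lambda n: sum(range(n + 1)),
        "closed-form formula": lambda n: n * (n + 1) // 2,
    }

    tests: List[TestCase] = [((3,), 6), ((5,), 15), ((0,), 0)]
    for line in generate_feedback(student_sum, candidates, tests):
        print(line)
```

In the paper's approach, the candidate space would instead be produced by recombining correct programs to diversify the available structures; the sketch above only illustrates the validate-and-report loop common to AFG techniques.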
Tue 29 Apr · Displayed time zone: Eastern Time (US & Canada)
Session: 11:00 - 12:30

11:00 (20m, Talk) Mitigating Obfuscation Attacks on Software Plagiarism Detectors via Subsequence Merging (CSEE&T)
Timur Sağlam, Nils Niehues, Sebastian Hahner (Karlsruhe Institute of Technology (KIT)); Larissa Schmid (Karlsruhe Institute of Technology). Pre-print available.

11:20 (20m, Talk) SOBO: A Feedback Bot to Nudge Code Quality in Programming Courses (CSEE&T)
Sofia Bobadilla (KTH Royal Institute of Technology, Sweden); Richard Glassey (KTH Royal Institute of Technology); Alexandre Bergel (University of Chile); Martin Monperrus (KTH Royal Institute of Technology).

11:40 (20m, Talk) Automated Feedback Generation for Programming Assignments through Diversification (CSEE&T)

12:00 (20m, Talk) Exploring how students test models in Model-Driven Engineering (CSEE&T)
Felix Cammaerts (KU Leuven); Beatriz Marín (Universitat Politècnica de València); Monique Snoeck (Katholieke Universiteit Leuven).