Automatic Assessment of Students' Software Models Using a Simple Heuristic and Machine Learning
Software models are increasingly popular. To educate the next generation of software engineers, it is important that they learn how to model software systems well, so that they can design them effectively in industry. It is equally important that instructors have tools that help them assess students' models more effectively. In this paper, we investigate how a tool that combines a simple heuristic with machine learning techniques can help assess student submissions in model-driven engineering courses. We apply our proposed technique first to identify submissions of high quality and second to predict approximate letter grades. For the former, the results are comparable to human grading and to a complex rule-based technique; for the latter, they are surprisingly accurate. Time of presentation: N/A (the talk is a pre-recorded video; the discussion takes place separately).
Tue 20 Oct, 13:30 - 15:00 (Eastern Time, US & Canada)
- Automatic Assessment of Students' Software Models Using a Simple Heuristic and Machine Learning
- Towards a Better Understanding of Interactions with a Domain Modeling Assistant
- From classic to agile: Experiences from more than a decade of project-based modeling education
- On Teaching Descriptive and Prescriptive Modeling