This paper analyzes Large Language Models (LLMs) with regard to their capabilities for generating programming exercises. Through a survey study, we established the state of the art, extracted the models' strengths and weaknesses, and derived an evaluation matrix to help researchers and educators decide which LLM is best suited for programming exercise generation. The findings reveal that multiple LLMs are capable of producing useful programming exercises. Nevertheless, several challenges remain, such as the ease with which LLMs solve exercises that were themselves generated by LLMs, and the models' limited creative capacity. The proposed evaluation matrix offers a structured approach to assessing LLMs along three main factors: (1) general assessment, (2) program analysis, and (3) qualitative assessment. This paper contributes to the ongoing discourse on the integration of AI in education by offering insights into the capabilities and limitations of LLMs for improving programming education.
Niklas Meissner, Institute of Software Engineering, University of Stuttgart; Sandro Speth, Institute of Software Engineering, University of Stuttgart; Steffen Becker, University of Stuttgart