Understanding Students' Knowledge of Programming Patterns Through Code Editing and Revising Tasks
How can we assess students’ knowledge of code structure so that we can give them the support they need to write well-structured code and to revise structural issues in existing code? We explore this question through a survey study with 328 intermediate CS students. We examined students’ performance in writing code, in revising their own poorly structured code, and in editing others’ poorly structured code. Our tasks targeted three pedagogically important control structure topics: \textit{Return Boolean expressions with operators}, \textit{Return Boolean expressions with method call}, and \textit{Unique vs. repeated code within an \code{if} and \code{else}}.
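For illustration, the Java sketch below contrasts an alternatively structured form with a normative form for each topic; the class, method names, and messages are hypothetical examples written for this summary, not the survey's actual items.
\begin{verbatim}
// Hypothetical Java sketches of the three topics; names and bodies are
// illustrative only and are not the survey's actual items.
public class StructureExamples {

    // Return Boolean expressions with operators:
    // alternatively structured (branch, then return literals) ...
    static boolean isAdultBranching(int age) {
        if (age >= 18) {
            return true;
        } else {
            return false;
        }
    }

    // ... vs. the normative style (return the expression directly).
    static boolean isAdult(int age) {
        return age >= 18;
    }

    // Return Boolean expressions with method call:
    // alternatively structured ...
    static boolean isBlankBranching(String s) {
        if (s.isEmpty()) {
            return true;
        }
        return false;
    }

    // ... vs. the normative style.
    static boolean isBlank(String s) {
        return s.isEmpty();
    }

    // Unique vs. repeated code within an if and else:
    // alternatively structured (the shared statement is duplicated) ...
    static void greetBranching(boolean returning) {
        if (returning) {
            System.out.println("Welcome back!");
            System.out.println("Enjoy your visit.");
        } else {
            System.out.println("Welcome!");
            System.out.println("Enjoy your visit.");
        }
    }

    // ... vs. the normative style (shared code lifted out of the branches).
    static void greet(boolean returning) {
        if (returning) {
            System.out.println("Welcome back!");
        } else {
            System.out.println("Welcome!");
        }
        System.out.println("Enjoy your visit.");
    }
}
\end{verbatim}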
The code writing section asked students to write short methods. The code editing section asked them to edit the style of given code blocks, and the code revising section provided students who wrote alternatively structured code with progressive prompts and opportunities to revise their code. The survey also included a section that asked students to identify the code block with the normative style and the code block that was most readable to them.
In contrast to the common practice of measuring students’ knowledge of code structure by counting anti-patterns in student code and treating them as knowledge gaps, we found that code writing alone is not an accurate measure of students’ knowledge: more than 55% of students who wrote poorly structured code on the returning-Boolean topics could successfully revise their code without requiring guidance on how to do so. Also, more than 25% of the students who wrote poorly structured code on these topics could properly edit the given code blocks. Therefore, in many cases, students’ alternatively structured code should not be attributed to knowledge gaps regarding the target structure.
Using logistic regressions, we also examined to what extent students’ success in writing well-structured code predicts their success in editing others’ code and revising their own. We found that for all three topics, students who wrote well were more likely to edit others’ code correctly; however, writing well explained only a small portion of the variance in the model. Other assessment items can complement code-writing tasks. In particular, students’ correct identification of the normative style and of the most readable code block were both weakly predictive of editing success. However, only selecting the normative style as most readable was predictive of code writing, and none of these items were reliable predictors of code revising.