Students often tackle programming problems with a flawed understanding of the problem itself. One classic intervention is to ask them to hone their understanding by constructing examples first. Yet this can reinforce misunderstandings: if a misunderstanding is represented consistently in both one's tests and one's implementation, running the tests won't reveal it. In this talk, I show how to give students timely, actionable feedback about their problem understanding long before they start coding. To do so, instructors construct hidden correct and incorrect implementations, which the students' IDEs then use to assess the validity and thoroughness of their tests. I implement this in Pyret and show that this feedback drastically improves the quality of test cases and, in some cases, implementations.
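The mechanism can be sketched as follows (a minimal illustration, not the Pyret implementation; all function names and the `median` example are hypothetical): a student's example suite is run against one hidden correct implementation and several hidden buggy ones. The suite is *valid* if every example agrees with the correct implementation, and its *thoroughness* is the fraction of buggy implementations it rejects.

```python
def median(xs):
    # Hidden correct implementation (instructor-provided).
    ys = sorted(xs)
    n = len(ys)
    mid = n // 2
    return ys[mid] if n % 2 else (ys[mid - 1] + ys[mid]) / 2

def median_bug_no_sort(xs):
    # Hidden buggy implementation: forgets to sort first.
    n = len(xs)
    mid = n // 2
    return xs[mid] if n % 2 else (xs[mid - 1] + xs[mid]) / 2

def median_bug_mean(xs):
    # Hidden buggy implementation: computes the mean instead.
    return sum(xs) / len(xs)

def assess(tests, correct, wrongs):
    """tests: the student's examples as (input, expected) pairs.
    Returns (validity, thoroughness)."""
    # Valid: every example agrees with the correct implementation.
    valid = all(correct(i) == e for i, e in tests)
    # Thorough: fraction of buggy implementations that fail some example.
    caught = sum(1 for w in wrongs if any(w(i) != e for i, e in tests))
    return valid, caught / len(wrongs)

suite = [([3, 1, 2], 2), ([4, 1, 2, 3], 2.5)]
valid, thoroughness = assess(
    suite, median, [median_bug_no_sort, median_bug_mean])
# This suite is valid but only half-thorough: no example distinguishes
# the median from the mean, a misunderstanding the feedback would surface.
```

The student never sees the hidden implementations, only the two scores, which flags a misunderstanding (here, median vs. mean) before any code is written.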
Sun 11 Jul