CSEE&T 2024
Mon 29 July - Thu 1 August 2024 Würzburg, Germany
Tue 30 Jul 2024 16:00 - 16:20 at Room 2 - State of Research Chair(s): Andreas Bollin

“The Code Mangler: Evaluating Coding Ability Without Writing Any Code” [1] introduced a hypothetical bad actor into an established, but still relatively new, problem type: the Parsons’ Problem [2]. When I replicated and extended the study, I found similar results: students found rearranging jumbled lines of correct code to be as difficult, and as good at assessing their abilities, as writing code from scratch [3]. Not only does this finding offer an accurate assessment tool that requires significantly less grading effort, it also presents an interesting possibility for future software engineers: the ability to practice and gain confidence in working with unfamiliar code, a skill which may translate to industry.
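To make the format concrete, consider a small, hypothetical Code Mangler-style item (invented for illustration; not one of the study’s actual exam questions). The student receives the lines of a correct Python function in jumbled order and must reconstruct the working program:

    # Jumbled lines as presented to the student (indentation preserved):
    #             total += value
    #     return total
    # def sum_positive(values):
    #         if value > 0:
    #     total = 0
    #     for value in values:

    # One correct reordering:
    def sum_positive(values):
        total = 0
        for value in values:
            if value > 0:
                total += value
        return total

Because a correct answer is a permutation of known lines, grading can be largely mechanical, which is consistent with the grading speedup reported below.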

As in the original paper, there was a clear difference in grading time: Code Mangler problems were graded 2.8 to 4.4 times faster than traditional code-writing problems. Likewise, students who did well on the exam as a whole did well on the Code Mangler problem, with a moderate correlation at high statistical significance (Spearman’s rank correlation coefficient, r_s = .3108, p < .001). By asking students for their perception of the experience, we gain additional insight into the impact of these questions. The questions, asked once per problem variant, and their response options were as follows:

  • What level of difficulty was the <traditional/mangled> problem? Extremely difficult, Very difficult, Reasonably difficult, Somewhat difficult, or Not difficult.

  • How well did the format of the <traditional/mangled> problem assess your abilities? Perfectly assessed my abilities, Very well, Reasonably well, Somewhat well, or Unable to assess my abilities.

  • How confident are you in your answer to the <traditional/mangled> problem? Certain, Very confident, Reasonably confident, Somewhat confident, or Not confident at all.

In all three instances (difficulty, ability to assess skills, and confidence in the answer), not only was the “equal” answer the clear winner, but the answers that did favor one variant over the other were balanced between the two. The only statistically significant difference between the variants was with regard to difficulty. Students rated the difficulty from “1 - Extremely difficult” to “5 - Not difficult.” The results show that traditional questions (M = 3.38, SD = .89) were considered less difficult than Code Mangler questions (M = 3.05, SD = .92), t(150) = 2.23, p = .027. While this meets the lowest bar of statistical significance typically used (p < .05), a result at this level has been described as “suggestive” rather than “significant” [4].
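For readers who want to run this style of analysis on their own exam data, here is a minimal sketch using SciPy on made-up numbers. The arrays are invented placeholders, not the study’s data, and the paired t-test is an assumption on my part, chosen because each student rated both problem variants:

    # Sketch only: synthetic data standing in for real exam results.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n = 151  # hypothetical number of students

    # Hypothetical overall exam totals and Code Mangler problem scores.
    exam_total = rng.normal(75, 10, n)
    mangler_score = 0.3 * exam_total + rng.normal(0, 8, n)

    # Rank correlation between overall performance and the Code Mangler score.
    r_s, p = stats.spearmanr(exam_total, mangler_score)
    print(f"Spearman r_s = {r_s:.4f}, p = {p:.4g}")

    # Hypothetical difficulty ratings (1 = Extremely difficult ... 5 = Not
    # difficult) for the two variants, compared with a paired t-test.
    traditional = rng.integers(1, 6, n).astype(float)
    mangled = rng.integers(1, 6, n).astype(float)
    t, p = stats.ttest_rel(traditional, mangled)
    print(f"t = {t:.2f}, p = {p:.3f}")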

While the results do provide additional evidence that these problems can be an effective alternative to code-writing exercises, the qualitative results revealed an additional point of interest for software engineering educators: comments indicating a lack of preparation for industry. For example: “The code mangler seems easier in terms of syntax errors, but understanding the logic of who wrote the program can be harder than writing your own version” and “It felt harder than the writing your own code question… it didn’t test my knowledge of writing code but tested how well I could read others and fix it.”

Originally, Parsons’ Problems were intended to provide “rote learning of syntactic constructs” [2]. This learning through repetition could benefit new graduates entering industry: repetitive practice with Parsons’ Problems could increase a learner’s ability to understand unfamiliar code, which may in turn reduce time-to-productivity when joining a new team, whether as a new developer or as an experienced one. Our future work includes the development of a tool to more easily generate a large number of variations of each Parsons’ Problem that can be imported directly into learning management systems such as Canvas or Moodle. This would allow rote learning, even on code not written by the instructor, with minimal effort required to create the problems.
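As a sketch of what such a generator might look like (the tool, its interface, and its LMS export format are all future work, so every name below is a hypothetical stand-in), the following produces distinct jumbled variants of one correct solution:

    # Hypothetical sketch: generate jumbled variants of a Parsons' Problem
    # from a correct solution. The planned tool's actual design and its
    # Canvas/Moodle import format are not specified here.
    import random

    SOLUTION = [
        "def sum_positive(values):",
        "    total = 0",
        "    for value in values:",
        "        if value > 0:",
        "            total += value",
        "    return total",
    ]

    def jumbled_variants(lines, count, seed=0):
        """Return `count` distinct shuffles of `lines`, none equal to the original."""
        rng = random.Random(seed)
        seen = {tuple(lines)}
        variants = []
        while len(variants) < count:
            shuffled = lines[:]
            rng.shuffle(shuffled)
            key = tuple(shuffled)
            if key not in seen:
                seen.add(key)
                variants.append(shuffled)
        return variants

    for i, variant in enumerate(jumbled_variants(SOLUTION, 3), 1):
        print(f"--- variant {i} ---")
        print("\n".join(variant))

Exporting each variant into an LMS question bank would be the tool’s job; this sketch stops at generating the permutations.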

Tue 30 Jul

Displayed time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna

15:20 - 16:20: State of Research (Research Track) at Room 2
Chair(s): Andreas Bollin (University of Klagenfurt, Austria)
15:20 (20m) Talk, Research Track
Thriving in the era of hybrid work: Raising cybersecurity awareness using serious games in industry trainings
Tiange Zhao, Tiago Espinha Gasiba (Siemens AG), Ulrike Lechner (Universität der Bundeswehr München), Maria Pinto-Albuquerque (Instituto Universitário de Lisboa, ISCTE-IUL)
15:40 (20m) Talk, Research Track
Towards understanding students’ sensemaking of test case design: a one-page summary
Niels Doorn (Open Universiteit and NHL Stenden University of Applied Sciences), Tanja E. J. Vos (Universitat Politècnica de València and Open Universiteit), Beatriz Marín (Universitat Politècnica de València)
16:00 (20m) Talk, Research Track
Extension of a study on Parsons' Problems reveals implications for SEE&T
Kevin Wendt (University of Minnesota)
