Evidence Profiles for Validity Threats in Program Comprehension Experiments
Searching for clues, gathering evidence, and reviewing case files are all techniques used by criminal investigators to draw sound conclusions and avoid wrongful convictions. Similarly, medicine has a long tradition of evidence-based practice, in which administering a treatment without evidence of its efficacy is considered malpractice. In software engineering (SE), evidence-based study designs enable sound methodologies, including the mitigation of validity threats. The SE body of knowledge, however, still lacks evidence on validity threats.
Echoing a recent call to evaluate design decisions in program comprehension experiments, we conducted a two-phase study consisting of systematic literature searches, snowballing, and thematic synthesis. We determined (1) which validity threat categories are most often discussed in primary studies of code comprehension, and (2) what the evidence profiles are for the three most commonly reported threats to validity.
We discovered that few mentions of validity threats in primary studies (31 of 409) were supported by referenced evidence. For the three most common threats, i.e., the influence of programming experience, program length, and the selected comprehension measures, almost all cited studies (17 of 18) did not meet our criteria for evidence. We show that the actual impact of many threats to validity currently assumed to be influential across all studies may depend strongly on the design and context of each specific study.
Researchers should discuss threats to validity within the context of their particular study and support their discussions with evidence. The present paper can serve as one such resource, and we call for more meta-studies of this type, which would then inform design decisions in primary studies. Further, although we have applied our methodology in the context of program comprehension, our approach can also be used in other SE research areas to enable evidence-based experiment design decisions and meaningful discussions of threats to validity.