Evidence Profiles for Validity Threats in Program Comprehension Experiments
Searching for clues, gathering evidence, and reviewing case files are all techniques used by criminal investigators to draw sound conclusions and avoid wrongful convictions. Similarly, medicine has a long tradition of evidence-based practice, in which administering a treatment without evidence of its efficacy is considered malpractice. In software engineering (SE), study designs that are based on evidence enable sound methodologies, including the mitigation of validity threats. The SE body of knowledge, however, lacks evidence on validity threats.
Echoing a recent call for the evaluation of design decisions in program comprehension experiments, we conducted a two-phase study consisting of systematic literature searches, snowballing, and thematic synthesis. We found (1) which validity threat categories are most often discussed in primary studies of code comprehension, and (2) what the evidence profiles are for the three most commonly reported threats to validity.
We discovered that few mentions of validity threats in primary studies (31 of 409) were supported by referenced evidence. For the three most common threats, i.e., the influences of programming experience, program length, and the selected comprehension measures, almost all cited studies (17 of 18) did not meet our criteria for evidence. We show that for many threats to validity that are currently assumed to be influential across all studies, the actual impact may depend heavily on the design and context of each specific study.
Researchers should discuss threats to validity within the context of their particular study and support their discussions with evidence. The present paper can be one resource for evidence, and we call for more meta-studies of this type to be conducted, which will then inform design decisions in primary studies. Further, although we have applied our methodology in the context of program comprehension, our approach can also be used in other SE research areas to enable evidence-based experiment design decisions and meaningful discussions of threats to validity.
Fri 19 May (displayed time zone: Hobart)
11:00 - 12:30 | Program Comprehension | Technical Track / Journal-First Papers | Meeting Room 103
Chair(s): Oscar Chaparro (College of William and Mary)

11:00 | 15m Talk | Code Comprehension Confounders: A Study of Intelligence and Personality (Journal-First Papers). Link to publication; Pre-print.

11:15 | 15m Talk | Identifying Key Classes for Initial Software Comprehension: Can We Do It Better? (Technical Track). Weifeng Pan (Zhejiang Gongshang University), Xin Du (Zhejiang Gongshang University), Hua Ming (Oakland University), Dae-Kyoo Kim (Oakland University), Zijiang Yang (Xi'an Jiaotong University and GuardStrike Inc).

11:30 | 15m Talk | Improving API Knowledge Discovery with ML: A Case Study of Comparable API Methods (Technical Track). Daye Nam, Brad A. Myers, Bogdan Vasilescu, Vincent J. Hellendoorn (Carnegie Mellon University). Pre-print.

11:45 | 15m Talk | Evidence Profiles for Validity Threats in Program Comprehension Experiments (Technical Track). Marvin Muñoz Barón (University of Stuttgart), Marvin Wyrich (Saarland University), Daniel Graziotin (University of Stuttgart), Stefan Wagner (University of Stuttgart). Pre-print.

12:00 | 15m Talk | Developers' Visuo-spatial Mental Model and Program Comprehension (Technical Track). Pre-print.

12:15 | 15m Talk | Two Sides of the Same Coin: Exploiting the Impact of Identifiers in Neural Code Comprehension (Technical Track). Shuzheng Gao (Harbin Institute of Technology), Cuiyun Gao (Harbin Institute of Technology), Chaozheng Wang (Harbin Institute of Technology), Jun Sun (Singapore Management University), David Lo (Singapore Management University), Yue Yu (National University of Defense Technology).