Empirical research aims to establish generalizable claims from data. Such claims may involve concepts that must be measured indirectly through indicators. Construct validity concerns whether one can justifiably make claims at the conceptual level that are supported by results at the operational level. We report a quantitative analysis of the awareness of construct validity in the software engineering literature between 2000 and 2019, and a qualitative review of 83 articles about human-centric experiments published in five high-quality journals between 2015 and 2019. Over the two decades, the appearance of the term construct validity in the literature increased sevenfold. Some of the reviewed articles employed various ways to ensure that the indicators span the concept in an unbiased manner. We also found articles that reuse formerly validated constructs. However, the articles disagree about how to define construct validity. Several interpret construct validity overly broadly, including threats to internal, external, or statistical conclusion validity. A few articles also include fundamental challenges of a study, such as cheating and misunderstanding of experiment material. The diversity of topics included as threats to construct validity calls for a more minimalist approach. Based on the review, we propose seven guidelines to improve how construct validity is handled and reported in software engineering.
Pre-print: Construct-Validity-in-Software-Engineering_Dag-Sjoberg.pdf (Construct_Validity.IDoESE 2022.Dag.Sjoberg.pdf), 2.16 MiB
Wed 21 Sep (time zone: Athens)

09:00 - 10:00 | Opening Session, IDoESE Doctoral Symposium, at Sonck. Chair(s): Maria Paasivaara, LUT University, Finland & Aalto University, Finland
09:00 | 20m | Other | Introductions (IDoESE Doctoral Symposium)
09:20 | 40m | Keynote | Construct Validity in Software Engineering (IDoESE Doctoral Symposium). Pre-print file attached.