How Developers Engineer Test Cases: An Observational Study
One of the main challenges developers face when testing their systems lies in engineering test cases that are good enough to reveal bugs. While our body of knowledge on software testing and automated test case generation is already quite significant, in practice developers are still the ones responsible for engineering test cases manually. Understanding developers' thought and decision-making processes while engineering test cases is therefore a fundamental step toward making developers better at testing software. In this paper, we observe 13 developers thinking aloud while testing different real-world open-source methods, and use these observations to explain how developers engineer test cases. We then challenge and augment our main findings by surveying 72 software developers on their testing practices. We discuss our results from three angles. First, we propose a general framework that explains how developers reason about testing. Second, we propose and describe in detail three overarching strategies that developers apply when testing. Third, we compare and relate our observations with the existing body of knowledge and propose future studies that would advance our knowledge on the topic.
Thu 18 May (displayed time zone: Hobart)
13:45 - 15:15 | Test quality and improvement (Technical Track / Journal-First Papers / DEMO - Demonstrations) at Meeting Room 110. Chair(s): Guowei Yang, University of Queensland
13:45 (15m Talk) | Test Selection for Unified Regression Testing | Technical Track | Shuai Wang, Xinyu Lian, Darko Marinov, Tianyin Xu (University of Illinois at Urbana-Champaign) | Pre-print
14:00 (15m Talk) | ATM: Black-box Test Case Minimization based on Test Code Similarity and Evolutionary Search | Technical Track | Rongqi Pan (University of Ottawa), Taher A Ghaleb (University of Ottawa), Lionel Briand (University of Luxembourg; University of Ottawa)
14:15 (15m Talk) | Measuring and Mitigating Gaps in Structural Testing | Technical Track | Soneya Binta Hossain, Matthew B Dwyer, Sebastian Elbaum, Anh Nguyen-Tuong (University of Virginia) | Pre-print
14:30 (7m Talk) | FlaPy: Mining Flaky Python Tests at Scale | DEMO - Demonstrations | Pre-print
14:37 (7m Talk) | Scalable and Accurate Test Case Prioritization in Continuous Integration Contexts | Journal-First Papers | Ahmadreza Saboor Yaraghi (University of Ottawa), Mojtaba Bagherzadeh (University of Ottawa), Nafiseh Kahani (Carleton University), Lionel Briand (University of Luxembourg; University of Ottawa)
14:45 (7m Talk) | Flakify: A Black-Box, Language Model-based Predictor for Flaky Tests | Journal-First Papers | Sakina Fatima (University of Ottawa), Taher A Ghaleb (University of Ottawa), Lionel Briand (University of Luxembourg; University of Ottawa)
14:52 (7m Talk) | Developer-centric test amplification | Journal-First Papers | Pre-print
15:00 (7m Talk) | How Developers Engineer Test Cases: An Observational Study | Journal-First Papers | Maurício Aniche (Delft University of Technology), Christoph Treude (University of Melbourne), Andy Zaidman (Delft University of Technology) | Pre-print