Automatically generating test cases for software has been an active research topic for many years. While current tools can generate powerful regression or crash-reproducing test cases, these are often kept separate from the maintained test suite. In this paper, we leverage the developer's familiarity with test cases amplified from existing, manually written developer tests. Starting from issues reported by developers in previous studies, we investigate which aspects are important for designing a developer-centric test amplification approach, one that provides test cases developers are willing to take over into their test suite. We conduct 16 semi-structured interviews with software developers, supported by our prototypical designs of a developer-centric test amplification approach and a corresponding test exploration tool. We extend the test amplification tool DSpot to generate test cases that are easier to understand. Our IntelliJ plugin TestCube empowers developers to explore amplified test cases from their familiar environment. From our interviews, we gather 52 observations that we summarize into 23 result categories, and we give two key recommendations on how future tool designers can make their tools better suited for developer-centric test amplification.
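To make the idea of test amplification concrete, here is a minimal, hypothetical JUnit 5 sketch. The `ShoppingCart` class and both tests are invented for illustration only; they are not taken from the paper or from DSpot's actual output. The first test stands for an existing, manually written developer test; the second shows the kind of variant an amplification tool may derive by varying inputs and adding assertions on observed behavior.

```java
import static org.junit.jupiter.api.Assertions.*;

import java.util.ArrayList;
import java.util.List;
import org.junit.jupiter.api.Test;

// ShoppingCart is an invented example class, not from the paper; it exists
// only so that the test sketch below is self-contained.
class ShoppingCart {
    private final List<Double> prices = new ArrayList<>();

    void addItem(String name, double price) { prices.add(price); }

    double total() { return prices.stream().mapToDouble(Double::doubleValue).sum(); }

    int itemCount() { return prices.size(); }

    boolean isEmpty() { return prices.isEmpty(); }
}

class ShoppingCartTest {

    // Stands for an original, manually written developer test.
    @Test
    void addItemIncreasesTotal() {
        ShoppingCart cart = new ShoppingCart();
        cart.addItem("book", 10.0);
        assertEquals(10.0, cart.total(), 0.001);
    }

    // A possible amplified variant: tools such as DSpot derive new tests from
    // existing ones by varying inputs and adding assertions that capture
    // observed behavior (DSpot's real output format may differ).
    @Test
    void addItemIncreasesTotal_amplified() {
        ShoppingCart cart = new ShoppingCart();
        cart.addItem("book", 10.0);
        cart.addItem("pen", 2.5);          // amplified input: a second item
        assertEquals(12.5, cart.total(), 0.001);
        assertEquals(2, cart.itemCount()); // amplified assertion on observed state
        assertFalse(cart.isEmpty());       // amplified assertion on observed state
    }
}
```

The developer-centric angle of the paper concerns how such amplified tests are presented and explored (for example via the TestCube plugin) so that developers actually adopt them into their test suite.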
Thu 18 May (displayed time zone: Hobart)
Session: Test quality and improvement, 13:45 - 15:15, Meeting Room 110. Tracks: Technical Track / Journal-First Papers / DEMO - Demonstrations. Chair(s): Guowei Yang (University of Queensland).
| Time | Duration | Type | Title | Track | Authors |
|------|----------|------|-------|-------|---------|
| 13:45 | 15m | Talk | Test Selection for Unified Regression Testing (pre-print) | Technical Track | Shuai Wang, Xinyu Lian, Darko Marinov, Tianyin Xu (University of Illinois at Urbana-Champaign) |
| 14:00 | 15m | Talk | ATM: Black-box Test Case Minimization based on Test Code Similarity and Evolutionary Search | Technical Track | Rongqi Pan (University of Ottawa), Taher A. Ghaleb (University of Ottawa), Lionel Briand (University of Luxembourg; University of Ottawa) |
| 14:15 | 15m | Talk | Measuring and Mitigating Gaps in Structural Testing (pre-print) | Technical Track | Soneya Binta Hossain, Matthew B. Dwyer, Sebastian Elbaum, Anh Nguyen-Tuong (University of Virginia) |
| 14:30 | 7m | Talk | FlaPy: Mining Flaky Python Tests at Scale (pre-print) | DEMO - Demonstrations | |
| 14:37 | 7m | Talk | Scalable and Accurate Test Case Prioritization in Continuous Integration Contexts | Journal-First Papers | Ahmadreza Saboor Yaraghi (University of Ottawa), Mojtaba Bagherzadeh (University of Ottawa), Nafiseh Kahani (Carleton University), Lionel Briand (University of Luxembourg; University of Ottawa) |
| 14:45 | 7m | Talk | Flakify: A Black-Box, Language Model-based Predictor for Flaky Tests | Journal-First Papers | Sakina Fatima (University of Ottawa), Taher A. Ghaleb (University of Ottawa), Lionel Briand (University of Luxembourg; University of Ottawa) |
| 14:52 | 7m | Talk | Developer-centric test amplification (pre-print) | Journal-First Papers | |
| 15:00 | 7m | Talk | How Developers Engineer Test Cases: An Observational Study (pre-print) | Journal-First Papers | Maurício Aniche (Delft University of Technology), Christoph Treude (University of Melbourne), Andy Zaidman (Delft University of Technology) |