MockSniffer: Characterizing and Recommending Mocking Decisions for Unit Tests
In unit testing, mocking is widely used to ease test effort, reduce test flakiness, and increase test coverage by replacing actual dependencies with simple implementations. However, there are no clear criteria for determining which dependencies in a unit test should be mocked. Inappropriate mocking can have undesirable consequences: under-mocking can make it impossible to isolate the class under test (CUT) from its dependencies, while over-mocking increases developers' burden of maintaining the mocked objects and may lead to spurious test failures. According to existing work, various factors can determine whether a dependency should be mocked. As a result, mocking decisions are often difficult to make in practice. Studies on the evolution of mocked objects also showed that developers tend to change their mocking decisions: 17% of the studied mocked objects were introduced sometime after the test scripts were created, and another 13% of the originally mocked objects eventually became unmocked. In this work, we are motivated to develop an automated technique that makes mocking recommendations to facilitate unit testing. We studied 10,846 test scripts in four actively maintained open-source projects that use mocked objects, aiming to characterize the dependencies that are mocked in unit testing. Based on our observations of mocking practices, we designed and implemented a tool, MockSniffer, to identify and recommend mocks for unit tests. The tool is fully automated and requires only the CUT and its dependencies as input. It leverages machine learning techniques to make mocking recommendations by holistically considering multiple factors that can affect developers' mocking decisions. Our evaluation of MockSniffer on ten open-source projects showed that it outperformed three baseline approaches and achieved good performance in two potential application scenarios.
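To make the mocking decision concrete, the sketch below shows the kind of dependency the abstract describes: a class under test (CUT) that calls an external service, where mocking the service isolates the CUT and keeps the test fast and deterministic. This is a minimal illustrative example using Python's standard-library `unittest.mock`; the class and method names (`PriceReporter`, `fetch_price`) are hypothetical and are not drawn from the paper or its studied projects.

```python
from unittest.mock import Mock

# Hypothetical class under test (CUT); in a real codebase, `client`
# might be a network client whose calls are slow or flaky in tests.
class PriceReporter:
    """Formats prices fetched from an external price service."""

    def __init__(self, client):
        self.client = client  # dependency that is a candidate for mocking

    def report(self, item):
        price = self.client.fetch_price(item)  # unmanageable against a real backend
        return f"{item}: ${price:.2f}"

# In a unit test, the dependency is replaced by a mock:
client = Mock()
client.fetch_price.return_value = 3.5    # stub the external call
reporter = PriceReporter(client)         # CUT is now isolated from the service

assert reporter.report("tea") == "tea: $3.50"
client.fetch_price.assert_called_once_with("tea")
```

The trade-off the abstract highlights is visible even here: mocking `client` removes flakiness, but every stubbed return value and call expectation is extra test code that must be kept in sync with the real dependency's behavior.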
Wed 23 Sep (times shown in UTC, Coordinated Universal Time)
00:00 - 01:00 | Testing (1) | Research Papers / Tool Demonstrations at Wombat | Chair(s): Lingming Zhang (University of Illinois at Urbana-Champaign, USA)
00:00 (20m, Talk) | MockSniffer: Characterizing and Recommending Mocking Decisions for Unit Tests | Research Papers | Hengcheng Zhu (Southern University of Science and Technology), Lili Wei (The Hong Kong University of Science and Technology), Ming Wen (Huazhong University of Science and Technology, China), Yepang Liu (Southern University of Science and Technology), Shing-Chi Cheung (Hong Kong University of Science and Technology, China), Qin Sheng (WeBank Co Ltd), Cui Zhou (WeBank Co Ltd) | DOI, Pre-print
00:20 (20m, Talk) | Defect Prediction Guided Search-Based Software Testing | Research Papers | Anjana Perera (Monash University), Aldeida Aleti (Monash University), Marcel Böhme (Monash University, Australia), Burak Turhan (Monash University) | DOI, Pre-print
00:40 (10m, Talk) | STIFA: Crowdsourced Mobile Testing Report Selection Based on Text and Image Fusion Analysis | Tool Demonstrations | Zhenfei Cao (Nanjing University), Xu Wang (Nanjing University), Shengcheng Yu (Nanjing University, China), Yexiao Yun (Nanjing University), Chunrong Fang (Nanjing University, China)