Despite recent advances in test generation, fully automatic software testing remains a dream: ultimately, any generated test input depends on a test oracle that determines correctness, and, except for generic properties such as “the program shall not crash”, such oracles require human input in one form or another. Crowdsourcing is a recently popular technique for automating computations that cannot be performed by machines, but only by humans: a problem is split into small chunks that are then solved by a crowd of users on the Internet. In this paper we investigate whether it is possible to exploit crowdsourcing to solve the oracle problem: we produce tasks asking users to evaluate CrowdOracles, assertions that reflect the current behavior of the program. If the crowd determines that an assertion does not match the behavior described in the code documentation, then a bug has been found. Our experiments demonstrate that CrowdOracles are a viable solution to automate the oracle problem, yet taming the crowd to get useful results is a difficult task.
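As a rough illustration of the kind of task the abstract describes, a CrowdOracle can be thought of as a generated JUnit assertion that records the program's current output, presented to a crowd worker alongside the method's documentation. The sketch below is hypothetical: the method, its documentation, and the assertion are invented for illustration and are not taken from the paper's artifacts.

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class CrowdOracleTaskExample {

    // Hypothetical method under test; its documentation is what the
    // crowd worker sees: "Returns the absolute value of x."
    static int abs(int x) {
        return x < 0 ? -x : x;
    }

    @Test
    public void capturedBehavior() {
        // The test generator executes the method and records its output.
        int result = abs(-5);
        // The assertion encodes the program's CURRENT behavior.
        // The crowd worker compares it with the documented behavior:
        // if they agree, the assertion can serve as a regression oracle;
        // if they disagree, a bug has been found.
        assertEquals(5, result);
    }
}
```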
Tue 18 Apr (times shown in the Dublin time zone)
16:00 - 17:30

16:00 (45m) Talk | CrowdOracles: Can the Crowd Solve the Oracle Problem? (MIPs) | Fabrizio Pastore (University of Luxembourg), Leonardo Mariani (University of Milano-Bicocca), Gordon Fraser (University of Passau) | DOI

16:45 (45m) Talk | ACTS: A Combinatorial Test Generation Tool (MIPs) | Linbin Yu, Yu Lei, Raghu Kacker (National Institute of Standards and Technology), Richard Kuhn (National Institute of Standards and Technology) | DOI