ETAPS 2019
Sat 6 - Thu 11 April 2019 Prague, Czech Republic
Thu 11 Apr 2019 17:30 - 18:00 at JUPITER - Software Testing Chair(s): Silvia Lizeth Tapia Tarifa

Abstract. Various approaches to software analysis, including test input generation techniques and software model checkers, require engineers to manually identify a subset of a module's methods in order to drive the analysis. More precisely, given a module to be analyzed, engineers typically select a subset of its methods to act as object builders, defining a so-called driver that automatically constructs objects for analysis, e.g., by combining the builders nondeterministically, randomly, etc. This selection must be made by careful inspection of the module and its API, since it affects both the relative exhaustiveness of the analysis (leaving important methods out may systematically prevent relevant objects from being generated) and its efficiency (the number of bounded method combinations of an API grows exponentially with the number of methods). In this paper, we propose an approach for automatically selecting a set of builders from a module's API. Our approach is based on an evolutionary algorithm that favors sets of methods whose combinations produce larger sets of objects. The algorithm also takes other characteristics of these method sets into account, prioritizing methods with fewer and simpler parameters. Since a direct implementation of the described evolutionary mechanism would require handling and comparing large sets of objects, whose cost grows very quickly in both space and running time, we employ an abstraction of sets of objects, called field extensions, which represents a set by the field values of its objects rather than the objects themselves, and enables us to implement our mechanism effectively.
We experimentally evaluate our approach on a benchmark of stateful Java classes, and show that our automatically built sets of methods are sufficient (i.e., they do not miss relevant methods) as well as minimal (they do not contain superfluous methods), and can be computed reasonably efficiently.
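The abstract's core ingredients — bounded combination of candidate builder methods, the field-extension abstraction, and an evolutionary search that rewards object coverage while penalizing set size and parameter complexity — can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the `Stack` class, the fitness weights, and all names are hypothetical.

```python
import itertools
import random

class Stack:
    """Toy stateful class under analysis (hypothetical)."""
    def __init__(self):
        self.items = []
    def push(self, x):
        self.items.append(x)
    def pop(self):
        if self.items:
            self.items.pop()
    def clear(self):
        self.items = []
    def size(self):          # observer: never changes the state
        return len(self.items)

# Candidate API: method name -> (action on a Stack, number of parameters).
# For simplicity, parameters are instantiated with a fixed value.
API = {
    "push":  (lambda s: s.push(1), 1),
    "pop":   (lambda s: s.pop(),   0),
    "clear": (lambda s: s.clear(), 0),
    "size":  (lambda s: s.size(),  0),
}

def field_extension(obj):
    # Abstraction: represent an object by its field values, not its identity.
    return tuple(obj.items)

def reachable_extensions(methods, depth=3):
    # Bounded-exhaustive construction: run every method sequence up to
    # `depth` and collect the field extensions of the resulting objects.
    seen = set()
    for k in range(depth + 1):
        for seq in itertools.product(sorted(methods), repeat=k):
            s = Stack()
            for name in seq:
                API[name][0](s)
            seen.add(field_extension(s))
    return seen

def fitness(methods):
    # Reward coverage (distinct objects built); penalize set size and
    # parameter complexity. The weights are arbitrary, for illustration.
    if not methods:
        return 0.0
    coverage = len(reachable_extensions(methods))
    param_cost = sum(API[m][1] for m in methods)
    return coverage - 0.5 * len(methods) - 0.25 * param_cost

def evolve(generations=30, pop_size=8, seed=0):
    rng = random.Random(seed)
    names = sorted(API)
    # Seed the population with the full API plus random subsets.
    pop = [frozenset(names)] + [
        frozenset(n for n in names if rng.random() < 0.5)
        for _ in range(pop_size - 1)
    ]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # elitist selection
        children = []
        for parent in survivors:
            child = set(parent)
            child ^= {rng.choice(names)}          # mutation: toggle one method
            children.append(frozenset(child))
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(sorted(best))
```

On this toy class, `push` alone already generates every field extension reachable within the bound, so the search tends to exclude the observer `size` and the destructive `pop` and `clear`; on a real API the fitness trades the exhaustiveness of the driver against the exponential cost of combining more methods.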

Thu 11 Apr

16:30 - 18:00: FASE 2019 - Software Testing at JUPITER
Chair(s): Silvia Lizeth Tapia Tarifa (University of Oslo)
16:30 - 17:00
Dirk Beyer (LMU Munich), Marie-Christine Jakobs (TU Darmstadt, Germany)
17:00 - 17:30
Golnaz Gharachorlu, Nick Sumner (Simon Fraser University)
17:30 - 18:00
Pablo Ponzio (Dept. of Computer Science FCEFQyN, University of Rio Cuarto), Valeria Bengolea (Dept. of Computer Science FCEFQyN, University of Rio Cuarto), Mariano Politano, Nazareno Aguirre (Dept. of Computer Science FCEFQyN, University of Rio Cuarto), Marcelo F. Frias (Dept. of Software Engineering, Instituto Tecnológico de Buenos Aires)