ETAPS 2019
Sat 6 - Thu 11 April 2019 Prague, Czech Republic
Thu 11 Apr 2019 17:30 - 18:00 at JUPITER - Software Testing Chair(s): Silvia Lizeth Tapia Tarifa

Abstract. Various approaches to software analysis, including test input generation techniques and software model checkers, require engineers to manually identify a subset of a module’s methods in order to drive the analysis or the test input generation. More precisely, given a module to be analyzed, engineers typically select a subset of its methods to act as object builders, defining a so-called driver that automatically builds objects for analysis, e.g., by combining the selected methods nondeterministically or randomly. This selection must be made through careful inspection of the module and its API, since it affects both the relative exhaustiveness of the analysis (leaving important methods out may systematically prevent relevant objects from being generated) and its efficiency (the number of bounded method combinations grows exponentially with the number of methods in the API). In this paper, we propose an approach for automatically selecting a set of builders from a module’s API. Our approach is based on an evolutionary algorithm that favors sets of methods whose combinations produce larger sets of objects. The algorithm also takes other characteristics of these method sets into account, prioritizing methods with fewer and simpler parameters. Since a direct implementation of this evolutionary mechanism would require handling and comparing large sets of objects, which quickly becomes expensive in both space and running time, we employ an abstraction of sets of objects, called field extensions, that represents a set by the field values of its objects rather than the objects themselves, and enables an effective implementation of our mechanism.
We experimentally evaluate our approach on a benchmark of stateful Java classes, and show that our automatically computed sets of methods are sufficient (i.e., they do not miss relevant methods) as well as minimal (i.e., they contain no superfluous methods), and can be computed reasonably efficiently.
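The core idea of the abstract can be illustrated with a small, self-contained sketch. This is not the authors' implementation (the paper targets Java modules; Python is used here only for brevity): the toy "module" (`push`, `pop`, `peek`, `clear`), the states-as-tuples encoding of field values, the fitness weights, and the exploration bounds are all illustrative assumptions. It shows the three ingredients the abstract describes: bounded combination of candidate builder methods, a field-extension-style abstraction (comparing sets of objects by their field values), and an evolutionary loop that rewards method sets reaching many distinct objects while penalizing extra methods and parameters.

```python
import itertools
import random

# Hypothetical toy "module": each method is (arity, state transformer).
# A state is a tuple of field values, standing in for an object's fields
# (the field-extension idea: compare object sets by field values, not objects).
def push(state, x): return state + (x,)              # builder: grows the structure
def pop(state):     return state[:-1] if state else state
def peek(state):    return state                     # observer: yields nothing new
def clear(state):   return ()

METHODS = {"push": (1, push), "pop": (0, pop),
           "peek": (0, peek), "clear": (0, clear)}
PARAM_DOMAIN = [0, 1]   # small input domain for bounded exploration
DEPTH = 3               # bound on method-sequence length

def field_extension(selected):
    """States reachable by combining the selected methods up to DEPTH times,
    starting from the empty object."""
    states, frontier = {()}, {()}
    for _ in range(DEPTH):
        new = set()
        for s in frontier:
            for name in selected:
                arity, fn = METHODS[name]
                for args in itertools.product(PARAM_DOMAIN, repeat=arity):
                    new.add(fn(s, *args))
        frontier = new - states
        states |= new
    return states

def fitness(selected):
    """Favor sets reaching many distinct states; small penalty per method
    and per parameter, so minimal sets with simple signatures win ties."""
    if not selected:
        return 0.0
    coverage = len(field_extension(selected))
    penalty = sum(1 + METHODS[m][0] for m in selected)
    return coverage - 0.1 * penalty

def evolve(generations=30, pop_size=10, seed=0):
    """Simple mutation-only evolutionary loop over method subsets."""
    rng = random.Random(seed)
    names = sorted(METHODS)
    pop = [frozenset(m for m in names if rng.random() < 0.5)
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        children = []
        for parent in survivors:
            child = set(parent)
            child.symmetric_difference_update({rng.choice(names)})  # flip one method
            children.append(frozenset(child))
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(sorted(best))
```

In this toy setting, only `push` produces new field-value states, so the search converges toward the singleton builder set `{push}`: adding `pop`, `peek`, or `clear` yields no new states but incurs the penalty. This mirrors the sufficiency/minimality goal described above, under the stated assumptions.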

Thu 11 Apr

Displayed time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna

16:30 - 18:00
Software Testing (FASE) at JUPITER
Chair(s): Silvia Lizeth Tapia Tarifa University of Oslo
16:30
30m
Talk
CoVeriTest: Cooperative, Verifier-Based Testing
FASE
Dirk Beyer (LMU Munich), Marie-Christine Jakobs (TU Darmstadt, Germany)
17:00
30m
Talk
Pardis: Priority Aware Test Case Reduction
FASE
Golnaz Gharachorlu, Nick Sumner (Simon Fraser University)
17:30
30m
Talk
Automatically Identifying Sufficient Object Builders from Module APIs
FASE
Pablo Ponzio (Dept. of Computer Science FCEFQyN, University of Rio Cuarto), Valeria Bengolea (Dept. of Computer Science FCEFQyN, University of Rio Cuarto), Mariano Politano, Nazareno Aguirre (Dept. of Computer Science FCEFQyN, University of Rio Cuarto), Marcelo F. Frias (Dept. of Software Engineering, Instituto Tecnológico de Buenos Aires)