Plug the Database & Play With Automatic Testing: Improving System Testing by Exploiting Persistent Data
A key challenge in automatic Web testing is generating syntactically and semantically valid input values that exercise the many functionalities that impose constraints on input validity. Existing test case generation techniques either rely on manually curated catalogs of values or extract values from external data sources, such as the Web or publicly available knowledge bases. Unfortunately, manual effort is generally too expensive for most practical applications, while domain-specific and application-specific data can hardly be found either on the Web or in general-purpose knowledge bases. This paper proposes DBInputs, a novel approach that reuses the data in the database of the target Web application to automatically identify domain-specific and application-specific inputs and effectively satisfy the validity constraints of the tested Web pages. DBInputs is well suited to system and maintenance testing, since databases are naturally and inexpensively available in those phases. To extract valid inputs from the application database, DBInputs exploits the syntactic and semantic similarity between the identifiers of the input fields and those of the database tables, automatically resolving the mismatch between the user interface and the database schema. Our experiments provide initial evidence that DBInputs can outperform both random input selection and LINK, a state-of-the-art approach that searches for inputs in knowledge bases.
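The abstract describes matching input-field identifiers against database column identifiers by syntactic similarity. The following is a minimal illustrative sketch of that idea, not the authors' implementation: it tokenizes camelCase and snake_case identifiers and scores candidate columns with a standard string-similarity ratio. All function names, the schema, and the field names are hypothetical.

```python
# Hypothetical sketch: map a Web form field id to the most similar
# database column, one signal DBInputs-style matching could use.
import re
from difflib import SequenceMatcher

def tokens(identifier):
    """Split 'userEmail' or 'user_email' into lowercase tokens."""
    spaced = re.sub(r"(?<=[a-z])(?=[A-Z])", " ", identifier)
    return [p.lower() for p in re.split(r"[_\-\s]+", spaced) if p]

def similarity(field, column):
    """Syntactic similarity between two identifiers, in [0, 1]."""
    return SequenceMatcher(None, " ".join(tokens(field)),
                           " ".join(tokens(column))).ratio()

def best_column(field, schema):
    """Pick the (table, column) pair most similar to the field id."""
    return max(((t, c) for t, cols in schema.items() for c in cols),
               key=lambda tc: similarity(field, tc[1]))

# Example: 'userEmail' on the form matches the 'user_email' column.
schema = {"users": ["user_email", "full_name"], "orders": ["order_date"]}
print(best_column("userEmail", schema))  # → ('users', 'user_email')
```

Once a field is mapped to a column, valid inputs can be drawn directly from the values stored in that column; a real system would combine such syntactic scores with semantic similarity, as the abstract notes.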