When generating tests for graphical user interfaces, one central problem is to identify how individual UI elements can be interacted with: by clicking, long- or right-clicking, swiping, dragging, typing, and more. We present an approach based on reinforcement learning that automatically learns which interactions can be used for which elements, and uses this information to guide test generation. We model the problem as an instance of the multi-armed bandit problem (MAB problem) from probability theory, and show how its traditional solution strategies apply to test generation, both with and without relying on previous knowledge. The resulting guidance yields higher coverage: in our evaluation, our approach improves statement coverage by between 18% (when not using any previous knowledge) and 20% (when reusing previously generated models).
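As a rough illustration of the multi-armed bandit formulation described in the abstract, the sketch below treats each (UI element, interaction) pair as an arm and selects interactions via Thompson sampling with Beta priors. The class and method names, and the reward signal (whether an interaction visibly changed the UI state), are assumptions made for illustration, not the authors' actual implementation.

```python
import random
from collections import defaultdict

class InteractionBandit:
    """Each (UI element, interaction) pair is one arm of the bandit.
    Illustrative sketch only; not the paper's implementation."""

    def __init__(self):
        # Beta(successes + 1, failures + 1) posterior per arm.
        self.successes = defaultdict(int)
        self.failures = defaultdict(int)

    def choose(self, element, interactions):
        """Pick the interaction (e.g. 'click', 'long-click', 'swipe')
        whose sampled success probability is highest."""
        def sample(interaction):
            arm = (element, interaction)
            return random.betavariate(self.successes[arm] + 1,
                                      self.failures[arm] + 1)
        return max(interactions, key=sample)

    def update(self, element, interaction, effective):
        """Reward 1 if the interaction had an observable effect
        (e.g. the UI state changed), 0 otherwise."""
        arm = (element, interaction)
        if effective:
            self.successes[arm] += 1
        else:
            self.failures[arm] += 1

# Hypothetical usage during test generation: ask the bandit which
# interaction to try on a widget, execute it, then feed back whether
# it changed the app state.
bandit = InteractionBandit()
action = bandit.choose("btn_login", ["click", "long-click", "swipe"])
bandit.update("btn_login", action, effective=True)
```

Over many test actions, arms whose interactions rarely have an effect are sampled less often, which is the kind of guidance the abstract describes; prior knowledge (e.g. previously generated models) could seed the per-arm counts instead of starting from uniform priors.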
Fri 19 Jul (displayed time zone: Beijing, Chongqing, Hong Kong, Urumqi)
14:00 - 15:30: Technical Papers session

14:00 (22 min) Talk: TestMig: Migrating GUI Test Cases from iOS to Android
Xue Qin, Hao Zhong (Shanghai Jiao Tong University), Xiaoyin Wang (University of Texas at San Antonio, USA)

14:22 (22 min) Talk: Learning User Interface Element Interactions
Christian Degott, Nataniel Borges Jr., Andreas Zeller (CISPA Helmholtz Center for Information Security). Pre-print available, media attached.

14:45 (22 min) Talk: Improving Random GUI Testing with Image-based Widget Detection
Thomas D. White (The University of Sheffield), Gordon Fraser (University of Passau), Guy J. Brown (The University of Sheffield)

15:07 (22 min) Talk: Automatically Testing Self-Driving Cars with Search-based Procedural Content Generation