When generating tests for graphical user interfaces, one central problem is to identify how individual UI elements can be interacted with: clicking, long- or right-clicking, swiping, dragging, typing, and more. We present an approach based on reinforcement learning that automatically learns which interactions can be used for which elements, and uses this information to guide test generation. We model the problem as an instance of the multi-armed bandit (MAB) problem from probability theory, and show how its traditional solutions apply to test generation, both with and without relying on previous knowledge. The resulting guidance yields higher coverage: in our evaluation, our approach improves statement coverage by between 18% (when not using any previous knowledge) and 20% (when reusing previously generated models).
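To make the bandit formulation concrete, the following is a minimal sketch, not the paper's implementation: it uses Thompson sampling, one of the traditional MAB solutions, to learn which interaction type is effective for a single UI element. The action set, the `InteractionBandit` class, and the simulated effectiveness values are illustrative assumptions introduced here for the example.

```python
import random

# Assumed action set for illustration; the real action vocabulary depends on
# the test generator and the platform under test.
ACTIONS = ["click", "long_click", "swipe", "type"]

class InteractionBandit:
    """Thompson-sampling bandit over interaction types for one UI element."""

    def __init__(self, actions=ACTIONS):
        # Beta(1, 1) uniform prior per action; a model learned in a previous
        # run could be loaded here instead to reuse previous knowledge.
        self.posteriors = {a: [1.0, 1.0] for a in actions}

    def choose(self):
        # Draw a success probability from each action's posterior and pick
        # the action with the largest draw (Thompson sampling).
        draws = {a: random.betavariate(p[0], p[1])
                 for a, p in self.posteriors.items()}
        return max(draws, key=draws.get)

    def update(self, action, effective):
        # Reward 1 if the interaction had an observable effect on the UI,
        # 0 otherwise; update the corresponding Beta posterior.
        if effective:
            self.posteriors[action][0] += 1.0
        else:
            self.posteriors[action][1] += 1.0

# Simulated element (hypothetical values): swiping almost always has an
# effect, clicking rarely does, typing never does.
TRUE_EFFECTIVENESS = {"click": 0.1, "long_click": 0.05, "swipe": 0.9, "type": 0.0}

bandit = InteractionBandit()
for _ in range(200):
    action = bandit.choose()
    effective = random.random() < TRUE_EFFECTIVENESS[action]
    bandit.update(action, effective)

# Print the learned mean effectiveness per action; "swipe" should dominate.
print({a: round(p[0] / (p[0] + p[1]), 2) for a, p in bandit.posteriors.items()})
```

Under this reading, reusing previously generated models would correspond to initializing the Beta parameters from counts observed in earlier runs rather than from the uniform prior.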
Fri 19 Jul (times displayed in Beijing time zone)
14:00 - 14:22  Talk: TestMig: Migrating GUI Test Cases from iOS to Android (Technical Papers). Xue Qin, Hao Zhong (Shanghai Jiao Tong University), Xiaoyin Wang (University of Texas at San Antonio, USA)

14:22 - 14:45  Talk: Learning User Interface Element Interactions (Technical Papers). Christian Degott, Nataniel Borges Jr., Andreas Zeller (CISPA Helmholtz Center for Information Security). Pre-print and media attached.

14:45 - 15:07  Talk: Improving Random GUI Testing with Image-based Widget Detection (Technical Papers). Thomas D. White (The University of Sheffield), Gordon Fraser (University of Passau), Guy J. Brown (The University of Sheffield)

15:07 - 15:30  Talk: Automatically Testing Self-Driving Cars with Search-based Procedural Content Generation (Technical Papers)