DeepUIFuzz: A Guided Fuzzing Strategy for Testing UI Component Detection Models
Deep learning-based object detection models are widely used for user interface (UI) component identification. However, these models often fail when they encounter UI configurations that differ from their training data. Existing methods generate test cases by seeding from the original test dataset and applying manually defined metamorphic relations (MRs). To eliminate the dependency on original test data and manually defined MRs, this study proposes DeepUIFuzz, a guided fuzzing methodology that uses Google’s Material Design layouts as seed templates. Our approach systematically explores the UI design space by fuzzing four key style dimensions (color, elevation, font, and shape) while maintaining HTML diversity through varied icons and images. All generated layouts were validated as structurally consistent through HTML and CSS validation checks, and human evaluators assessed the realism of the generated layouts, confirming their usability. Evaluating the generated test cases against four prominent object detection models (YOLOv3, YOLOv5, SSD, and FCOS) yields Error Finding Rates ranging from 0.86 to 0.95.
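The abstract describes mutating four style dimensions of a Material Design seed layout. The sketch below is not from the paper; the seed template, the style pools, and the fuzz_style helper are illustrative assumptions showing how one such mutation step could look in Python.

```python
# Minimal illustrative sketch (not the authors' implementation) of fuzzing the
# four style dimensions named in the abstract: color, elevation, font, and
# shape. The seed template and value pools are assumptions for illustration.
import random

COLOR_POOL = ["#6200ee", "#03dac6", "#b00020", "#018786", "#3700b3"]
ELEVATION_POOL = [0, 1, 2, 4, 8, 16, 24]   # Material elevation levels (dp)
FONT_POOL = ["Roboto", "Noto Sans", "Open Sans", "Lato"]
SHAPE_POOL = [0, 4, 8, 16, 28]             # corner radius in px

SEED_TEMPLATE = """\
<button class="mdc-button" style="
  background-color: {color};
  box-shadow: 0 {elevation}px {blur}px rgba(0,0,0,.3);
  font-family: '{font}', sans-serif;
  border-radius: {radius}px;">
  Submit
</button>
"""

def fuzz_style(rng: random.Random) -> str:
    """Sample one mutation per style dimension and render a layout variant."""
    elevation = rng.choice(ELEVATION_POOL)
    return SEED_TEMPLATE.format(
        color=rng.choice(COLOR_POOL),
        elevation=elevation,
        blur=2 * elevation,                # simple shadow-blur heuristic
        font=rng.choice(FONT_POOL),
        radius=rng.choice(SHAPE_POOL),
    )

if __name__ == "__main__":
    rng = random.Random(42)
    # Each variant would then be rendered, screenshotted, and fed to the
    # detection model under test; a wrong or missing bounding box counts
    # as a detected error.
    for i in range(3):
        print(f"--- variant {i} ---\n{fuzz_style(rng)}")
```

Under a common reading of the metric (an assumption; the paper's exact definition may differ), an Error Finding Rate of 0.86 to 0.95 would mean that 86 to 95 percent of the generated layouts expose a detection error in the model under test.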
Mon 28 Apr (displayed time zone: Eastern Time, US & Canada)

11:00 - 12:30 | SBFT session
11:00 | 60m | Keynote | Keynote by Marcel Böhme | Marcel Böhme (MPI for Security and Privacy)
12:00 | 15m | Research paper | DeepUIFuzz: A Guided Fuzzing Strategy for Testing UI Component Detection Models | Proma Chowdhury (University of Dhaka), Kazi Sakib (Institute of Information Technology, University of Dhaka)
12:15 | 15m | Research paper | On Evaluating Fuzzers with Context-Sensitive Fuzzed Inputs: A Case Study on PKCS#1-v1.5 | S Mahmudul Hasan (Syracuse University), Polina Kozyreva (Syracuse University), Endadul Hoque (Syracuse University)