GUIDE: LLM-Driven GUI Generation Decomposition for Automated Prototyping
This program is tentative and subject to change.
Graphical user interface (GUI) prototyping is one of the most valuable techniques for eliciting requirements: it facilitates the visualization and refinement of customer needs and closely integrates the customer into development activities. While GUI prototyping has a positive impact on the software development process, it also demands significant effort and resources. The emergence of Large Language Models (LLMs) with their impressive code generation capabilities offers a promising approach for automating GUI prototyping. Despite this potential, there is a gap between current LLM-based prototyping solutions and traditional user-based GUI prototyping approaches, which provide visual representations of GUI prototypes and direct editing functionality. In contrast, LLMs merely produce text sequences or non-editable image outputs, which lack both of these aspects and therefore impede GUI prototyping support. Moreover, minor changes requested by the user typically lead to an inefficient regeneration of the entire GUI prototype when LLMs are used directly. In this work, we propose GUIDE, a novel LLM-driven GUI generation decomposition approach seamlessly integrated into the popular prototyping framework Figma. Our approach first decomposes high-level GUI descriptions into fine-grained GUI requirements, which are subsequently translated into Material Design GUI prototypes, enabling higher controllability and more efficient adaptation of changes. To efficiently conduct prompting-based generation of Material Design GUI prototypes, we propose a Retrieval-Augmented Generation (RAG) approach that integrates the component library. Our preliminary evaluation demonstrates the effectiveness of GUIDE in bridging the gap between LLM generation capabilities and traditional GUI prototyping workflows, offering a more effective and controlled user-based approach to LLM-driven GUI prototyping.
Video presentation of GUIDE is available at: https://youtu.be/ODktyuQxSqo
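The RAG step described in the abstract can be illustrated with a minimal sketch: each fine-grained GUI requirement is matched against entries of a component library, and the most similar components are retrieved to ground the prompt. The toy library, function names, and bag-of-words cosine similarity below are illustrative assumptions, not GUIDE's actual implementation (which targets the full Material Design library):

```python
# Hypothetical sketch of RAG-style component retrieval: match a fine-grained
# GUI requirement against a small, illustrative component library using
# bag-of-words cosine similarity. All names here are assumptions for
# demonstration, not part of GUIDE itself.
from collections import Counter
import math

COMPONENT_LIBRARY = {
    "Button": "clickable button action submit confirm press",
    "TextField": "text input field enter type email password form",
    "Checkbox": "checkbox toggle select option boolean choice",
    "AppBar": "top app bar title navigation header toolbar",
}

def _vec(text: str) -> Counter:
    # Tokenize into a term-frequency vector.
    return Counter(text.lower().split())

def _cosine(a: Counter, b: Counter) -> float:
    # Standard cosine similarity over sparse term-frequency vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve_components(requirement: str, k: int = 2) -> list[str]:
    """Return the k component names most similar to the requirement."""
    q = _vec(requirement)
    ranked = sorted(
        COMPONENT_LIBRARY,
        key=lambda name: _cosine(q, _vec(COMPONENT_LIBRARY[name])),
        reverse=True,
    )
    return ranked[:k]

print(retrieve_components("a field where the user can enter their email"))
```

In a production setting, the term-frequency vectors would typically be replaced by dense text embeddings, with the retrieved component specifications injected into the generation prompt.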
Wed 30 Apr. Displayed time zone: Eastern Time (US & Canada).
11:00 - 12:30 | AI for User Experience | SE In Practice (SEIP) / Demonstrations / Journal-first Papers / Research Track | Room 210
11:00 15m Talk | Automated Generation of Accessibility Test Reports from Recorded User Transcripts (Award Winner) | Research Track | Syed Fatiul Huq (University of California, Irvine), Mahan Tafreshipour (University of California, Irvine), Kate Kalcevich (Fable Tech Labs Inc.), Sam Malek (University of California, Irvine)
11:15 15m Talk | KuiTest: Leveraging Knowledge in the Wild as GUI Testing Oracle for Mobile Apps | SE In Practice (SEIP) | Yongxiang Hu (Fudan University), Yu Zhang (Meituan), Xuan Wang (Fudan University), Yingjie Liu (School of Computer Science, Fudan University), Shiyu Guo (Meituan), Chaoyi Chen (Meituan), Xin Wang (Fudan University), Yangfan Zhou (Fudan University)
11:30 15m Talk | GUIWatcher: Automatically Detecting GUI Lags by Analyzing Mobile Application Screencasts | SE In Practice (SEIP) | Wei Liu (Concordia University, Montreal, Canada), Feng Lin (Concordia University), Linqiang Guo (Concordia University), Tse-Hsun (Peter) Chen (Concordia University), Ahmed E. Hassan (Queen's University)
11:45 15m Talk | GUIDE: LLM-Driven GUI Generation Decomposition for Automated Prototyping | Demonstrations | Kristian Kolthoff (Institute for Software and Systems Engineering, Clausthal University of Technology), Felix Kretzer (Human-Centered Systems Lab (h-lab), Karlsruhe Institute of Technology (KIT)), Christian Bartelt, Alexander Maedche (Human-Centered Systems Lab, Karlsruhe Institute of Technology), Simone Paolo Ponzetto (Data and Web Science Group, University of Mannheim)
12:00 15m Talk | Agent for User: Testing Multi-User Interactive Features in TikTok | SE In Practice (SEIP) | Sidong Feng (Monash University), Changhao Du (Jilin University), Huaxiao Liu (Jilin University), Qingnan Wang (Jilin University), Zhengwei Lv (ByteDance), Gang Huo (ByteDance), Xu Yang (ByteDance), Chunyang Chen (TU Munich)
12:15 7m Talk | Bug Analysis in Jupyter Notebook Projects: An Empirical Study | Journal-first Papers | Taijara Santana (Federal University of Bahia), Paulo Silveira Neto (Federal Rural University of Pernambuco), Eduardo Santana de Almeida (Federal University of Bahia), Iftekhar Ahmed (University of California at Irvine)