Prompt-with-Me: in-IDE Structured Prompt Management for LLM-Driven Software Engineering
This program is tentative and subject to change.
Large Language Models are transforming software engineering, yet prompt management in practice remains ad hoc, hindering reliability, reuse, and integration into industrial workflows. We present *Prompt-with-Me*, a practical solution for structured prompt management embedded directly in the development environment. The system automatically classifies prompts using a four-dimensional taxonomy encompassing intent, author role, software development lifecycle stage, and prompt type. To enhance prompt reuse and quality, Prompt-with-Me suggests language refinements, masks sensitive information, and extracts reusable templates from a developer's prompt library.
Our taxonomy study of 1,108 real-world prompts demonstrates that modern LLMs can accurately classify software engineering prompts. Furthermore, our user study with 11 participants shows strong developer acceptance, with high usability (Mean SUS=73), low cognitive load (Mean NASA-TLX=21), and reported gains in prompt quality and efficiency through reduced repetitive effort. Lastly, we offer actionable insights for building the next generation of prompt management and maintenance tools for software engineering workflows.
Tue 18 Nov (displayed time zone: Seoul)
Session: 16:00 - 17:00

| Time | Talk | Title | Track | Authors |
|------|------|-------|-------|---------|
| 16:00 | 10m Talk | An Empirical Study on UI Overlap in OpenHarmony Applications | Industry Showcase | |
| 16:10 | 10m Talk | Metrics Driven Reengineering and Continuous Code Improvement at Meta | Industry Showcase | Audris Mockus (University of Tennessee), Peter C Rigby (Meta / Concordia University), Rui Abreu (Meta), Nachiappan Nagappan (Meta Platforms, Inc.) |
| 16:20 | 10m Talk | Prompt-with-Me: in-IDE Structured Prompt Management for LLM-Driven Software Engineering | Industry Showcase | Ziyou Li (Delft University of Technology), Agnia Sergeyuk (JetBrains Research), Maliheh Izadi (Delft University of Technology) |
| 16:30 | 10m Talk | Are We SOLID Yet? An Empirical Study on Prompting LLMs to Detect Design Principle Violations | NIER Track | Fatih Pehlivan, Arçin Ülkü Ergüzen, Sahand Moslemi Yengejeh, Mayasah Lami, Anil Koyuncu (all Bilkent University) |
| 16:40 | 10m Talk | Shrunk, Yet Complete: Code Shrinking-Resilient Android Third-Party Library Detection | Industry Showcase | Jingkun Zhang (Institute of Software, Chinese Academy of Sciences; University of Chinese Academy of Sciences), Jingzheng Wu (Institute of Software, Chinese Academy of Sciences), Xiang Ling (Institute of Software, Chinese Academy of Sciences), Tianyue Luo (Institute of Software, Chinese Academy of Sciences), Bolin Zhou (Institute of Software, Chinese Academy of Sciences; University of Chinese Academy of Sciences), Mutian Yang (Beijing ZhongKeWeiLan Technology Co., Ltd.) |
| 16:50 | 10m Talk | LLM-Guided Genetic Improvement: Envisioning Semantic Aware Automated Software Evolution | NIER Track | Karine Even-Mendoza (King's College London), Alexander E.I. Brownlee (University of Stirling), Alina Geiger (Johannes Gutenberg University Mainz), Carol Hanna (University College London), Justyna Petke (University College London), Federica Sarro (University College London), Dominik Sobania (Johannes Gutenberg-Universität Mainz) |


