Understanding code is challenging, especially when working in new and complex development environments. Code comments and documentation can help, but are typically scarce or hard to navigate. Large language models (LLMs) are revolutionizing the process of writing code. Can they do the same to help developers understand it? In this study, we provide a first investigation of an LLM-based conversational UI built directly in the IDE that is geared towards code understanding. Our IDE plugin queries OpenAI’s GPT-3.5 and GPT-4 models with four high-level requests without the user having to write explicit prompts: to explain a highlighted section of code, provide details of API calls used in the code, explain key domain-specific terms, and provide usage examples for an API. The plugin also allows for open-ended prompts, which are automatically contextualized to the LLM with the program being edited. We evaluate this system in a user study with 32 participants, which confirms that using our plugin can aid task completion more than web search. We additionally provide a thorough analysis of the ways developers use, and perceive the usefulness of, our system, finding, among other things, that the usage and benefits differ significantly between students and professionals. We conclude that in-IDE prompt-less interaction with LLMs is a promising future direction for tool builders.
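For readers curious what such a prompt-less request might look like in practice, below is a minimal sketch built against the OpenAI chat completions API. The `explain_highlighted_code` helper, the prompt wording, and the way the surrounding file is attached as context are illustrative assumptions, not the plugin's actual implementation.

```python
# Illustrative sketch only: how an IDE plugin might turn an "explain this code"
# action into an LLM query without the user writing a prompt. The helper name,
# prompt templates, and context strategy are assumptions, not the paper's code.
from openai import OpenAI  # official OpenAI Python SDK (v1+), needs OPENAI_API_KEY

client = OpenAI()


def explain_highlighted_code(highlighted: str, file_context: str,
                             model: str = "gpt-3.5-turbo") -> str:
    """Ask the model to explain a highlighted snippet, automatically
    contextualized with the file currently being edited."""
    messages = [
        {"role": "system",
         "content": "You are a programming assistant embedded in an IDE. "
                    "Explain code clearly and concisely for the developer."},
        {"role": "user",
         "content": f"Here is the file currently being edited:\n"
                    f"```\n{file_context}\n```\n\n"
                    f"Explain what the highlighted section does:\n"
                    f"```\n{highlighted}\n```"},
    ]
    response = client.chat.completions.create(model=model, messages=messages)
    return response.choices[0].message.content


# Example usage (hypothetical file and snippet):
# print(explain_highlighted_code("df.groupby('id').agg('mean')",
#                                open("analysis.py").read()))
```

The other three high-level requests (API call details, domain-term explanations, usage examples) could be handled the same way, by swapping in a different fixed prompt template while keeping the automatic file context.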
Thu 18 Apr (times shown in the Lisbon time zone)
14:00 - 15:30 | Session: LLM, NN and other AI technologies 4 | Research Track / Industry Challenge Track / New Ideas and Emerging Results | Pequeno Auditório | Chair(s): David Nader Palacio (William & Mary)
14:00 (15m talk) | Programming Assistant for Exception Handling with CodeBERT | Research Track | Yuchen Cai, Aashish Yadavally, Abhishek Mishra, Genesis Montejo, Tien N. Nguyen (University of Texas at Dallas)
14:15 (15m talk) | An Empirical Study on Noisy Label Learning for Program Understanding | Research Track | Wenhan Wang, Yanzhou Li, Anran Li, Jian Zhang, Wei Ma, Yang Liu (Nanyang Technological University, Singapore) | Pre-print
14:30 (15m talk) | An Empirical Study on Low GPU Utilization of Deep Learning Jobs | Research Track | Yanjie Gao (Microsoft Research), Yichen He, Xinze Li (Microsoft Research), Bo Zhao (Microsoft Research), Haoxiang Lin (Microsoft Research), Yoyo Liang (Microsoft), Jing Zhong (Microsoft), Hongyu Zhang (Chongqing University), Jingzhou Wang (Microsoft Research), Yonghua Zeng (Microsoft), Keli Gui (Microsoft), Jie Tong (Microsoft), Mao Yang (Microsoft Research) | DOI, Pre-print
14:45 (15m talk) | Using an LLM to Help With Code Understanding | Research Track | Daye Nam (Carnegie Mellon University), Andrew Macvean (Google, Inc.), Vincent J. Hellendoorn (Carnegie Mellon University), Bogdan Vasilescu (Carnegie Mellon University), Brad A. Myers (Carnegie Mellon University)
15:00 (15m talk) | MissConf: LLM-Enhanced Reproduction of Configuration-Triggered Bugs | Industry Challenge Track | Ying Fu (National University of Defense Technology), Teng Wang (National University of Defense Technology), Shanshan Li (National University of Defense Technology), Jinyan Ding (National University of Defense Technology), Shulin Zhou (National University of Defense Technology), Zhouyang Jia (National University of Defense Technology), Wang Li (National University of Defense Technology), Yu Jiang (Tsinghua University), Liao Xiangke (National University of Defense Technology) | File Attached
15:15 (7m talk) | XAIport: A Service Framework for the Early Adoption of XAI in AI Model Development | New Ideas and Emerging Results | Zerui Wang (Concordia University), Yan Liu (Concordia University), Abishek Arumugam Thiruselvi (Concordia University), Wahab Hamou-Lhadj (Concordia University, Montreal, Canada) | DOI, Pre-print
15:22 (7m talk) | Which Syntactic Capabilities Are Statistically Learned by Masked Language Models for Code? | New Ideas and Emerging Results | Alejandro Velasco (William & Mary), David Nader Palacio (William & Mary), Daniel Rodriguez-Cardenas, Denys Poshyvanyk (William & Mary) | Pre-print