Exploring Direct Instruction and Summary-Mediated Prompting in LLM-Assisted Code Modification
This program is tentative and subject to change.
This paper presents a study of how developers use large language models (LLMs) to modify existing code. While LLM-based code generation has been widely studied, the role of LLMs in code modification remains less understood. Although "prompting" serves as the primary interface through which developers communicate intent to LLMs, constructing effective prompts for code modification introduces challenges distinct from generation. Prior work suggests that natural language summaries may help scaffold this process, yet such approaches have been validated primarily in narrow domains like SQL rewriting. This study investigates two prompting strategies for LLM-assisted code modification: Direct Instruction Prompting, where developers describe changes explicitly in free-form language, and Summary-Mediated Prompting, where developers express changes by editing LLM-generated summaries of the code. We conducted an exploratory study with 15 developers who completed modification tasks using both techniques across multiple scenarios. Our findings suggest that developers followed an iterative workflow: understanding the code, localizing the edit, and validating outputs through execution or semantic reasoning. Each prompting strategy presented trade-offs: direct instruction prompting was more flexible and easier to specify, while summary-mediated prompting supported comprehension, prompt scaffolding, and control. Developers' choice of strategy was shaped by task goals and context, including urgency, maintainability, learning intent, and code familiarity. These findings highlight the need for more usable prompt interactions, including adjustable summary granularity, reliable summary-code traceability, and consistency in generated summaries.
Fri 10 Oct (Displayed time zone: Eastern Time, US & Canada)
Session: 14:00 - 15:30

14:00 (11 min) Talk: Interface Design for Autism in an Ever-Updating World. Research Papers.

14:11 (22 min) Talk: HiLDe: Intentional Code Generation via Human-in-the-Loop Decoding. Research Papers.
Emmanuel Anaya Gonzalez (UC San Diego), Raven Rothkopf (UC San Diego), Sorin Lerner (UC San Diego), Nadia Polikarpova (UC San Diego)

14:33 (22 min) Talk: Exploring Direct Instruction and Summary-Mediated Prompting in LLM-Assisted Code Modification. Research Papers. Pre-print available.
Ningzhi Tang (University of Notre Dame), Emory Smith (University of Notre Dame), Yu Huang (Vanderbilt University), Collin McMillan (University of Notre Dame), Toby Jia-Jun Li (University of Notre Dame)

14:55 (22 min) Talk: A Type Language for Blockly. Research Papers.

15:17 (11 min) Talk: TreeReader: A Hierarchical Academic Paper Reader Powered by Language Models. Research Papers.
Zijian Zhang (University of Toronto), Pan Chen (University of Toronto), Fangshi Du (University of Toronto), Runlong Ye (University of Toronto), Oliver Huang (University of Toronto), Michael Liut (University of Toronto Mississauga), Alán Aspuru-Guzik (University of Toronto)