ICSE 2024
Fri 12 - Sun 21 April 2024 Lisbon, Portugal

Large Language Models (LLMs) represent a leap in artificial intelligence, excelling at tasks involving human language(s). Although code generation is not the main focus of general-purpose LLMs, they have shown promising results in this domain. However, the usefulness of LLMs in large-scale software engineering development has not been fully explored yet. In this study, we explore the usefulness of LLMs for 214 students working on an academic software engineering project in teams of up to six members. The students were encouraged to integrate LLMs into their development toolchain.

In this paper, we analyze statistics for the AI-generated code, the prompts used for code generation, and the levels of human intervention required to integrate the code. We also conduct a perception study to gain insights into the perceived usefulness, influencing factors, and future outlook from a student perspective. Our findings suggest that LLMs can play a crucial role in the early stages of software development, especially in generating foundational code structures and helping with syntax and error debugging. These insights provide us with a framework for effectively utilizing LLMs as a tool to enhance the productivity of software engineering students, and highlight the necessity of shifting the educational focus toward preparing students for successful human-AI collaboration.

Sat 20 Apr

Displayed time zone: Lisbon

16:00 - 17:30
Session 4: Full Papers + Award & Closing (LLM4Code) at Luis de Freitas Branco
Chair(s): Prem Devanbu University of California at Davis
16:00
10m
Talk
Investigating the Proficiency of Large Language Models in Formative Feedback Generation for Student Programmers
LLM4Code
Smitha S Kumar Heriot-Watt University - UAE, Michael Lones Heriot-Watt University - UK, Manuel Maarek Heriot-Watt University, Hind Zantout Heriot-Watt University - UAE
Pre-print
16:10
10m
Talk
Tackling Students' Coding Assignments with LLMs
LLM4Code
Adam Dingle Charles University, Martin Kruliš Charles University
Pre-print
16:20
10m
Talk
Applying Large Language Models to Enhance the Assessment of Parallel Functional Programming Assignments (Best Presentation Award)
LLM4Code
Skyler Grandel Vanderbilt University, Douglas C. Schmidt Vanderbilt University, Kevin Leach Vanderbilt University
Pre-print
16:30
10m
Talk
An Empirical Study on Usage and Perceptions of LLMs in a Software Engineering Project
LLM4Code
Sanka Rasnayaka National University of Singapore, Wang Guanlin National University of Singapore, Ridwan Salihin Shariffdeen National University of Singapore, Ganesh Neelakanta Iyer National University of Singapore
Pre-print
16:40
10m
Talk
LLMs for Relational Reasoning: How Far are We?
LLM4Code
Zhiming Li Nanyang Technological University, Singapore, Yushi Cao Nanyang Technological University, Xiufeng Xu Nanyang Technological University, Junzhe Jiang Hong Kong Polytechnic University, Xu Liu North Carolina State University, Yon Shin Teo Continental Automotive Singapore Pte. Ltd., Shang-Wei Lin Nanyang Technological University, Yang Liu Nanyang Technological University
Pre-print
16:50
10m
Talk
HawkEyes: Spotting and Evading Instruction Disalignments of LLMs
LLM4Code
Dezhi Ran Peking University, Zihe Song University of Texas at Dallas, Wenhan Zhang Peking University, Wei Yang University of Texas at Dallas, Tao Xie Peking University
17:00
10m
Talk
Semantically Aligned Question and Code Generation for Automated Insight Generation (Best Paper Award)
LLM4Code
Ananya Singha Microsoft, Bhavya Chopra Microsoft, Anirudh Khatry Microsoft, Sumit Gulwani Microsoft, Austin Henley University of Tennessee, Vu Le Microsoft, Chris Parnin Microsoft, Mukul Singh Microsoft, Gust Verbruggen Microsoft
Pre-print
17:10
20m
Day closing
Award & Closing
LLM4Code