ICSE 2024
Fri 12 - Sun 21 April 2024 Lisbon, Portugal
Fri 19 Apr 2024 12:00 - 12:15 at Grande Auditório - LLM, NN and other AI technologies 5 Chair(s): Baishakhi Ray

Recent research has explored the creation of questions from code submitted by students. These Questions about Learners’ Code (QLCs) are created through program analysis: execution paths are explored, and code comprehension questions are then generated from these paths and the broader code structure. Answering the questions requires reading and tracing the code, which is known to support students’ learning. At the same time, computing education researchers have witnessed the emergence of Large Language Models (LLMs) that have taken the community by storm. Researchers have demonstrated the applicability of these models especially in the introductory programming context, outlining their performance in solving introductory programming problems and their utility in creating new learning resources. In this work, we explore the capability of state-of-the-art LLMs (GPT-3.5 and GPT-4) to answer QLCs generated from code that the LLMs themselves have created. Our results show that although these LLMs can create programs and trace program execution when prompted, they readily make errors similar to those previously recorded for novice programmers. These results demonstrate the fallibility of the models and perhaps dampen the expectations fueled by the recent LLM hype. At the same time, we also highlight future research possibilities, such as using LLMs to mimic students, as their behavior can indeed be similar for some specific tasks.
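
To make the QLC idea concrete, the following is a minimal sketch (not the authors’ generator) of how such a question could be produced: run a small “learner” program under a tracer, record the values a chosen variable takes, and turn that trace into a comprehension question whose expected answer is the recorded sequence. The example program, the function names (trace_variable, make_qlc), and the question template are illustrative assumptions.

```python
import sys

# Example "learner" program, kept as source text so it can be traced.
LEARNER_CODE = """
def count_positive(numbers):
    count = 0
    for n in numbers:
        if n > 0:
            count = count + 1
    return count

result = count_positive([3, -1, 4, 0, 5])
"""

def trace_variable(source, var_name):
    """Execute `source` and record the distinct consecutive values of `var_name`."""
    values = []

    def tracer(frame, event, arg):
        # On each executed line, note the variable's current value if it exists.
        if event == "line" and var_name in frame.f_locals:
            value = frame.f_locals[var_name]
            if not values or values[-1] != value:
                values.append(value)
        return tracer

    sys.settrace(tracer)
    try:
        exec(compile(source, "<learner_code>", "exec"), {})
    finally:
        sys.settrace(None)
    return values

def make_qlc(source, var_name):
    """Turn the recorded trace into a simple code comprehension question."""
    values = trace_variable(source, var_name)
    question = (f"While this program runs, which sequence of values does "
                f"the variable '{var_name}' take?")
    return question, values

if __name__ == "__main__":
    question, answer = make_qlc(LEARNER_CODE, "count")
    print(question)
    print("Expected answer:", answer)  # here: [0, 1, 2, 3]
```

Answering such a question requires tracing the loop and the condition by hand, which is exactly the kind of task on which the paper compares GPT-3.5 and GPT-4 against known novice-programmer behavior.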

Fri 19 Apr

Displayed time zone: Lisbon

11:00 - 12:30 LLM, NN and other AI technologies 5 at Grande Auditório Chair(s): Baishakhi Ray
11:00
15m
Talk
Enhancing Exploratory Testing by Large Language Model and Knowledge Graph
Research Track
Yanqi Su Australian National University, Dianshu Liao Australian National University, Zhenchang Xing CSIRO's Data61, Qing Huang School of Computer Information Engineering, Jiangxi Normal University, Mulong Xie CSIRO's Data61, Qinghua Lu Data61, CSIRO, Xiwei (Sherry) Xu Data61, CSIRO
11:15
15m
Talk
LLMParser: An Exploratory Study on Using Large Language Models for Log Parsing
Research Track
Zeyang Ma Concordia University, An Ran Chen University of Alberta, Dong Jae Kim Concordia University, Tse-Hsun (Peter) Chen Concordia University, Shaowei Wang Department of Computer Science, University of Manitoba, Canada
11:30
15m
Talk
Enhancing Text-to-SQL Translation for Financial System Design
Software Engineering in Practice
Yewei Song University of Luxembourg, Saad Ezzini Lancaster University, Xunzhu Tang University of Luxembourg, Cedric Lothritz University of Luxembourg, Jacques Klein University of Luxembourg, Tegawendé F. Bissyandé University of Luxembourg, Andrey Boytsov Banque BGL BNP Paribas, Ulrick Ble Banque BGL BNP Paribas, Anne Goujon Banque BGL BNP Paribas
11:45
15m
Talk
Towards Building AI-CPS with NVIDIA Isaac Sim: An Industrial Benchmark and Case Study for Robotics Manipulation
Software Engineering in Practice
Zhehua Zhou University of Alberta, Jiayang Song University of Alberta, Xuan Xie University of Alberta, Zhan Shu University of Alberta, Lei Ma The University of Tokyo & University of Alberta, Dikai Liu NVIDIA AI Tech Centre, Jianxiong Yin NVIDIA AI Tech Centre, Simon See NVIDIA AI Tech Centre
12:00
15m
Talk
Let's Ask AI About Their Programs: Exploring ChatGPT's Answers To Program Comprehension Questions
Software Engineering Education and Training
Teemu Lehtinen Aalto University, Charles Koutcheme Aalto University, Arto Hellas Aalto University
12:15
15m
Talk
Experience Report: Identifying common misconceptions and errors of novice programmers with ChatGPT
Software Engineering Education and Training
Hua Leong Fwa Singapore Management University