ICSE 2024
Fri 12 - Sun 21 April 2024 Lisbon, Portugal

Large language models (LLMs) have revolutionized many areas (e.g., natural language processing and software engineering) by achieving state-of-the-art performance on a wide range of downstream tasks. In pursuit of robust and general artificial intelligence, there has been a surge of interest in investigating the reasoning ability of LLMs. However, the textual and numerical reasoning benchmarks adopted by previous work are rather shallow and simple, so positive results on these benchmarks alone are insufficient to conclude that LLMs possess strong reasoning ability. Recent work has also shown, by evaluating LLMs on reinforcement learning benchmarks, that they are poor at solving sequential decision-making problems that require common-sense planning. In this work, we conduct an in-depth assessment of several state-of-the-art LLMs' reasoning ability on the inductive logic programming (ILP) benchmark, which is widely recognized as a representative and challenging measure for logic program induction/synthesis systems: it requires inducing strict cause-effect logic in order to achieve robust deduction on both independent and identically distributed (IID) and out-of-distribution (OOD) test samples. Our evaluations show that, compared with neural program induction systems that are far smaller in model size, state-of-the-art LLMs reason much more poorly, achieving substantially lower performance and generalization under either natural language prompting or truth-value matrix prompting.
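The abstract contrasts natural language prompting with truth-value matrix prompting on relational-reasoning (ILP) tasks. As a rough illustration only (not drawn from the paper; the grandparent task, entity names, and prompt wording are assumptions), the minimal Python sketch below shows one way the same relational facts could be rendered in both formats and scored against a ground-truth deduction.

```python
# Illustrative sketch, not the paper's actual evaluation protocol:
# pose a simple relational-reasoning task (induce "grandparent" from
# "parent" facts) either as natural-language text or as a truth-value matrix.

entities = ["ann", "bob", "carol", "dave"]          # hypothetical entities
parent_facts = {("ann", "bob"), ("bob", "carol"), ("carol", "dave")}

def truth_value_matrix(relation, entities):
    """Binary matrix M with M[i][j] = 1 iff relation(entities[i], entities[j]) holds."""
    return [[1 if (a, b) in relation else 0 for b in entities] for a in entities]

def natural_language_prompt(facts):
    """Natural-language rendering of the facts plus a query."""
    lines = [f"{a} is a parent of {b}." for a, b in sorted(facts)]
    lines.append("Question: who is a grandparent of whom?")
    return "\n".join(lines)

def matrix_prompt(facts, entities):
    """Truth-value-matrix rendering of the same facts plus a query."""
    rows = [" ".join(map(str, row)) for row in truth_value_matrix(facts, entities)]
    return (
        "Entities: " + ", ".join(entities) + "\n"
        "parent/2 truth-value matrix (rows = first argument):\n"
        + "\n".join(rows)
        + "\nFill in the truth-value matrix for grandparent/2."
    )

# Ground-truth deduction an evaluator could score an LLM's answer against:
grandparent_facts = {
    (a, c) for (a, b) in parent_facts for (b2, c) in parent_facts if b == b2
}

if __name__ == "__main__":
    print(natural_language_prompt(parent_facts))
    print()
    print(matrix_prompt(parent_facts, entities))
    print()
    print("Expected grandparent pairs:", sorted(grandparent_facts))
```

The two prompt renderings carry identical information; the ILP-style test is whether a model can induce the underlying rule (grandparent(X, Z) :- parent(X, Y), parent(Y, Z)) well enough to deduce correct answers on held-out IID and OOD entity sets.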

Sat 20 Apr

Displayed time zone: Lisbon

16:00 - 17:30
Session 4: Full Papers + Award & Closing
LLM4Code at Luis de Freitas Branco
Chair(s): Prem Devanbu University of California at Davis
16:00
10m
Talk
Investigating the Proficiency of Large Language Models in Formative Feedback Generation for Student Programmers
LLM4Code
Smitha S Kumar Heriot-Watt University - UAE, Michael Lones Heriot-Watt University - UK, Manuel Maarek Heriot-Watt University, Hind Zantout Heriot-Watt University - UAE
Pre-print
16:10
10m
Talk
Tackling Students' Coding Assignments with LLMs
LLM4Code
Adam Dingle Charles University, Martin Kruliš Charles University
Pre-print
16:20
10m
Talk
Applying Large Language Models to Enhance the Assessment of Parallel Functional Programming Assignments
Best Presentation Award
LLM4Code
Skyler Grandel Vanderbilt University, Douglas C. Schmidt Vanderbilt University, Kevin Leach Vanderbilt University
Pre-print
16:30
10m
Talk
An Empirical Study on Usage and Perceptions of LLMs in a Software Engineering Project
LLM4Code
Sanka Rasnayaka National University of Singapore, Wang Guanlin National University of Singapore, Ridwan Salihin Shariffdeen National University of Singapore, Ganesh Neelakanta Iyer National University of Singapore
Pre-print
16:40
10m
Talk
LLMs for Relational Reasoning: How Far are We?
LLM4Code
Zhiming Li Nanyang Technological University, Singapore, Yushi Cao Nanyang Technological University, Xiufeng Xu Nanyang Technological University, Junzhe Jiang Hong Kong Polytechnic University, Xu Liu North Carolina State University, Yon Shin Teo Continental Automotive Singapore Pte. Ltd., Shang-Wei Lin Nanyang Technological University, Yang Liu Nanyang Technological University
Pre-print
16:50
10m
Talk
HawkEyes: Spotting and Evading Instruction Disalignments of LLMs
LLM4Code
Dezhi Ran Peking University, Zihe Song University of Texas at Dallas, Wenhan Zhang Peking University, Wei Yang University of Texas at Dallas, Tao Xie Peking University
17:00
10m
Talk
Semantically Aligned Question and Code Generation for Automated Insight Generation
Best Paper Award
LLM4Code
Ananya Singha Microsoft, Bhavya Chopra Microsoft, Anirudh Khatry Microsoft, Sumit Gulwani Microsoft, Austin Henley University of Tennessee, Vu Le Microsoft, Chris Parnin Microsoft, Mukul Singh Microsoft, Gust Verbruggen Microsoft
Pre-print
17:10
20m
Day closing
Award & Closing
LLM4Code