ICSE 2024
Fri 12 - Sun 21 April 2024 Lisbon, Portugal
Thu 18 Apr 2024 11:45 - 12:00 at Pequeno Auditório - LLM, NN and other AI technologies 3 Chair(s): Tushar Sharma

Code reviews are a critical part of the software development process, taking a significant amount of the code authors’ and the code reviewers’ time. As part of this process, the reviewer inspects the proposed code and asks the author for code changes through comments written in natural language. At Google, we see millions of reviewer comments per year, and authors require an average of ∼60 minutes of active shepherding time between sending a change for review and finally submitting it. In our measurements, the active work time that the code author must devote to addressing reviewer comments grows almost linearly with the number of comments. However, with machine learning (ML), we have an opportunity to automate and streamline the code-review process, e.g., by proposing code changes based on a comment’s text. We describe our application of recent advances in large sequence models in a real-world setting to automatically resolve code-review comments in the day-to-day development workflow at Google.

We present the evolution of this feature from asynchronous generation of suggested edits after the reviewer sends feedback to an interactive experience that suggests code edits to the reviewer at review time. In deployment, code-change authors at Google address almost 5% of all reviewer comments by applying an ML-suggested edit, and this number is likely to reach 10% with a promising new version of the assistant. At Google scale, this will reduce the time spent on code reviews by hundreds of thousands of engineer hours annually. Unsolicited, very positive feedback highlights that ML-suggested code edits increase Googlers’ productivity and allow them to focus on more creative and complex tasks.
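To make the task concrete, the sketch below is an illustrative example only, not the system described in the talk: it shows one way a reviewer comment and the code span it refers to could be serialized into an input for a sequence model that predicts a suggested edit. ReviewComment, build_prompt, and the model's generate() method are hypothetical placeholders.

    # Illustrative sketch only (not Google's implementation): framing a reviewer
    # comment and its code context as input for a sequence model that predicts
    # a suggested edit. ReviewComment, build_prompt, and the model's generate()
    # method are hypothetical placeholders.
    from dataclasses import dataclass


    @dataclass
    class ReviewComment:
        file_path: str        # file the comment is attached to
        commented_code: str   # code span the reviewer highlighted
        text: str             # the reviewer's natural-language request


    def build_prompt(comment: ReviewComment) -> str:
        """Serialize the comment and its code context into a single model input."""
        return (
            f"FILE: {comment.file_path}\n"
            f"CODE:\n{comment.commented_code}\n"
            f"REVIEWER COMMENT: {comment.text}\n"
            "SUGGESTED EDIT:\n"
        )


    def suggest_edit(model, comment: ReviewComment) -> str:
        """Ask the sequence model for a replacement for the commented code span."""
        return model.generate(build_prompt(comment))

In an interactive setting, such a suggestion could be produced while the reviewer drafts the comment and offered to the author as a one-click edit.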

Thu 18 Apr

Displayed time zone: Lisbon

11:00 - 12:30 Session: LLM, NN and other AI technologies 3 at Pequeno Auditório (Chair(s): Tushar Sharma)
11:00
15m
Talk
Xpert: Empowering Incident Management with Query Recommendations via Large Language Models
Research Track
Yuxuan Jiang University of Michigan Ann-Arbor, Chaoyun Zhang Microsoft, Shilin He Microsoft Research, Zhihao Yang Peking University, Minghua Ma Microsoft Research, Si Qin Microsoft Research, Yu Kang Microsoft Research, Yingnong Dang Microsoft Azure, Saravan Rajmohan Microsoft 365, Qingwei Lin Microsoft, Dongmei Zhang Microsoft Research
11:15
15m
Talk
Tensor-Aware Energy Accounting
Research Track
Timur Babakol SUNY Binghamton, USA, Yu David Liu SUNY Binghamton
DOI Pre-print
11:30
15m
Talk
LLM4PLC: Harnessing Large Language Models for Verifiable Programming of PLCs in Industrial Control Systems
Software Engineering in Practice
Mohamad Fakih University of California, Irvine, Rahul Dharmaji University of California, Irvine, Yasamin Moghaddas University of California, Irvine, Gustavo Quiros Siemens Technology, Tosin Ogundare Siemens Technology, Mohammad Al Faruque UCI
11:45
15m
Talk
Resolving Code Review Comments with Machine Learning
Software Engineering in Practice
Alexander Frömmgen Google, Jacob Austin Google, Peter Choy Google, Nimesh Ghelani Google, Lera Kharatyan Google, Gabriela Surita Google, Elena Khrapko Google, Pascal Lamblin Google, Pierre-Antoine Manzagol Google, Marcus Revaj Google, Maxim Tabachnyk Google, Danny Tarlow Google, Kevin Villela Google, Dan Zheng Google DeepMind, Satish Chandra Google, Inc, Petros Maniatis Google DeepMind
12:00
15m
Talk
LLMs Still Can't Avoid Instanceof: An Investigation Into GPT-3.5, GPT-4 and Bard's Capacity to Handle Object-Oriented Programming Assignments
Software Engineering Education and Training
Bruno Pereira Cipriano Lusófona University, COPELABS, Pedro Alves Lusófona University, COPELABS
12:15
7m
Talk
Leveraging Large Language Models to Improve REST API Testing
New Ideas and Emerging Results
Myeongsoo Kim Georgia Institute of Technology, Tyler Stennett Georgia Institute of Technology, Dhruv Shah Georgia Institute of Technology, Saurabh Sinha IBM Research, Alessandro Orso Georgia Institute of Technology
Pre-print
12:22
7m
Talk
LogExpert: Log-based Recommended Resolutions Generation using Large Language Model
New Ideas and Emerging Results
Jiabo Wang Beijing University of Posts and Telecommunications, Guojun Chu Beijing University of Posts and Telecommunications, Jingyu Wang, Haifeng Sun Beijing University of Posts and Telecommunications, Qi Qi, Yuanyi Wang Beijing University of Posts and Telecommunications, Ji Qi China Mobile (Suzhou) Software Technology Co., Ltd., Jianxin Liao Beijing University of Posts and Telecommunications