ICSE 2026
Sun 12 - Sat 18 April 2026 Rio de Janeiro, Brazil

This program is tentative and subject to change.

Thu 16 Apr 2026 12:15 - 12:30 at Oceania VII - Software Engineering for AI 4 Chair(s): Nathan Wintersgill

Source code is usually formatted with elements such as indentation and newlines to improve readability for human developers. However, these visual aids do not appear to benefit large language models (LLMs) in the same way, since the code is processed as a linear sequence of tokens. Moreover, the additional tokens increase computational cost and response latency for LLMs. If formatting elements are non-essential to LLMs, these costs can be reduced by removing them from the code. To determine the role formatting elements play, we conduct a comprehensive empirical study evaluating the impact of code formatting on LLM performance and efficiency. Through large-scale experiments on Fill-in-the-Middle code completion tasks across four programming languages (Java, Python, C++, C#) and ten LLMs, including both commercial and open-source models, we systematically analyze token counts and performance when formatting elements are removed. Key findings indicate that LLMs maintain their performance on both formatted and unformatted code, achieving an average input token reduction of 24.5% with negligible output token reductions. This makes code format removal a practical optimization strategy for improving LLM efficiency. Further exploration reveals that both prompting and fine-tuning LLMs can yield significant reductions (up to 36.1%) in output code length without compromising correctness. To facilitate practical adoption, we develop a bidirectional code transformation tool for format processing that integrates seamlessly into existing LLM inference workflows, preserving both human readability and LLM efficiency.
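The abstract does not detail the bidirectional transformation tool itself, so the sketch below is a minimal, hypothetical illustration of the idea for brace-delimited languages (Java, C++, C#): a strip pass removes indentation and blank lines before LLM inference to save input tokens, and a restore pass re-indents the compact output for human readers. The function names and the brace-counting heuristic are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (hypothetical, not the authors' tool) of bidirectional
# format processing around LLM inference: strip formatting to save input
# tokens, then restore indentation for human readers. The brace-counting
# heuristic only suits brace-delimited languages (Java, C++, C#);
# indentation-sensitive languages like Python need a structure-aware pass.

def strip_formatting(code: str) -> str:
    """Drop indentation and blank lines to shrink the token count."""
    lines = (line.strip() for line in code.splitlines())
    return "\n".join(line for line in lines if line)

def restore_formatting(code: str, indent: str = "    ") -> str:
    """Re-indent compact, brace-delimited code (toy formatter)."""
    depth = 0
    out = []
    for line in code.splitlines():
        # A leading '}' closes its block before this line is placed.
        this_depth = depth - 1 if line.startswith("}") else depth
        out.append(indent * max(this_depth, 0) + line)
        depth += line.count("{") - line.count("}")
    return "\n".join(out)

if __name__ == "__main__":
    java = (
        'public class Hello {\n'
        '    public static void main(String[] args) {\n'
        '        System.out.println("Hello");\n'
        '    }\n'
        '}\n'
    )
    compact = strip_formatting(java)           # what the LLM would see
    readable = restore_formatting(compact)     # what the developer sees
    assert strip_formatting(readable) == compact  # round trip is stable
    print(compact)
```

A production version would swap the toy re-indenter for a language-aware formatter (e.g. clang-format or google-java-format), keeping the same strip/restore shape around the inference call.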


Thu 16 Apr

Displayed time zone: Brasilia, Distrito Federal, Brazil

11:00 - 12:30 Software Engineering for AI 4 at Oceania VII
11:00
15m
Talk
NeMo: A Neuron-level Modularizing-While-Training Approach for Decomposing DNN Models
Journal-first Papers
Xiaohan Bi Beihang University, Binhang Qi National University of Singapore, Hailong Sun Beihang University, Xiang Gao Beihang University, Yue Yu PengCheng Lab, Xiaojun Liang PengCheng Lab
11:15
15m
Talk
A Selective Quantization Tuner for ONNX Models
New Ideas and Emerging Results (NIER)
Nikolaos Louloudakis The University of Edinburgh, Ajitha Rajan The University of Edinburgh
11:30
15m
Paper
Green LLM Techniques in Action: How Effective Are Existing Techniques for Improving the Energy Efficiency of LLM-Based Applications in Industry?
SE In Practice (SEIP)
Pelin Rabia Kuran Vrije Universiteit Amsterdam, Rumbidzai Chitakunye Vrije Universiteit Amsterdam, Vincenzo Stoico Vrije Universiteit Amsterdam, Ilja Heitlager Schuberg Philis, Justus Bogner Vrije Universiteit Amsterdam
DOI Pre-print
11:45
15m
Talk
DNN Modularization via Activation-Driven Training
Research Track
Tuan Ngo University of Southern California, Abid Hassan University of Southern California, Saad Shafiq University of Southern California, Nenad Medvidović University of Southern California
Pre-print
12:00
15m
Talk
ModularEvo: Evolving Multi-Task Models via Neural Network Modularization and Composition
Research Track
Wenrui Long Beihang University, Binhang Qi Beihang University, Hailong Sun Beihang University, ZongZhen Yang Beihang University, Ruobing Zhao Beihang University, Xiang Gao Beihang University
12:15
15m
Talk
The Hidden Cost of Readability: How Code Formatting Silently Consumes Your LLM Budget
Distinguished Paper Award
Research Track
Dangfeng Pan, Zhensu Sun Singapore Management University, Cenyuan Zhang Monash University, David Lo Singapore Management University, Xiaoning Du Monash University