ASE 2025
Sun 16 - Thu 20 November 2025 Seoul, South Korea

This program is tentative and subject to change.

Mon 17 Nov 2025 12:04 - 12:17 at Grand Hall 4 - Efficiency & Fairness 1

Large Language Models (LLMs) are increasingly adopted to optimize source code, offering the promise of faster, more efficient programs without manual tuning. This capability is particularly appealing in the context of sustainable computing, where enhanced performance is often assumed to correspond to reduced energy consumption. However, LLMs themselves are energy- and resource-intensive, raising critical questions about whether their use for code optimization is energetically justified. Prior work mainly focused on runtime performance gains, leaving a gap in our understanding of the broader energy implications of LLM-based code optimization.

In this paper, we report on a systematic, energy-focused evaluation of LLM-based code optimization methods. Relying on 118 tasks from the EvalPerf benchmark, we assess the trade-offs between code performance, correctness, and energy consumption of multiple optimization methods across multiple families of LLMs. We introduce the Break-Even Point (BEP) as a key metric to quantify the number of executions required before the energy saved by an optimized program outweighs the energy consumed in generating the optimization itself.
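To make the metric concrete, here is a minimal sketch of one plausible formulation (the exact definition is not given in this abstract and is assumed here): let $E_{\text{gen}}$ denote the one-off energy spent generating the optimization, and $E_{\text{orig}}$ and $E_{\text{opt}}$ the per-execution energy of the original and optimized program, respectively. Then

$$\text{BEP} = \left\lceil \frac{E_{\text{gen}}}{E_{\text{orig}} - E_{\text{opt}}} \right\rceil, \qquad E_{\text{orig}} > E_{\text{opt}},$$

i.e., under this reading the optimization pays off energetically only once the optimized program has been executed at least BEP times; if $E_{\text{opt}} \ge E_{\text{orig}}$, no break-even point exists.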

Our results show that, while certain configurations achieve substantial speedups and energy reductions, these benefits often require hundreds to hundreds of thousands of executions before they become energetically profitable. Moreover, the optimization process often yields incorrect or less efficient code. Importantly, we identify a weak negative correlation between performance gains and actual energy savings, challenging the assumption that faster code automatically equates to a smaller energy footprint. This work underscores the necessity of energy-aware optimization strategies. Practitioners should carefully target LLM-based optimization efforts to high-frequency, high-impact workloads, while monitoring energy consumption across the entire life cycle of development and deployment.

Mon 17 Nov

Displayed time zone: Seoul

11:00 - 12:30
Efficiency & Fairness 1 (Research Papers) at Grand Hall 4
11:00
12m
Talk
AutoFid: Adaptive and Noise-Aware Fidelity Measurement for Quantum Programs via Circuit Graph Analysis
Research Papers
Tingting Li Zhejiang University, Ziming Zhao Zhejiang University, Jianwei Yin Zhejiang University
11:12
12m
Talk
HybridSIMD: A Super C++ SIMD Library with Integrated Auto-tuning Capabilities
Research Papers
Haolin Pan Institute of Software, Chinese Academy of Sciences; School of Intelligent Science and Technology, HIAS, UCAS, Hangzhou; University of Chinese Academy of Sciences, Xulin Zhou Institute of Software, Chinese Academy of Sciences; University of Chinese Academy of Sciences, Mingjie Xing Institute of Software, Chinese Academy of Sciences, Yanjun Wu Institute of Software, Chinese Academy of Sciences
11:25
12m
Talk
PEACE: Towards Efficient Project-Level Performance Optimization via Hybrid Code Editing
Research Papers
Xiaoxue Ren Zhejiang University, Jun Wan Zhejiang University, Yun Peng The Chinese University of Hong Kong, Zhongxin Liu Zhejiang University, Ming Liang Ant Group, Dajun Chen Ant Group, Wei Jiang Ant Group, Yong Li Ant Group
11:38
12m
Talk
CoTune: Co-evolutionary Configuration Tuning
Research Papers
Gangda Xiong University of Electronic Science and Technology of China, Tao Chen University of Birmingham
Pre-print
11:51
12m
Talk
It's Not Easy Being Green: On the Energy Efficiency of Programming Languages
Research Papers
Nicolas van Kempen University of Massachusetts Amherst, USA, Hyuk-Je Kwon University of Massachusetts Amherst, Dung Nguyen University of Massachusetts Amherst, Emery D. Berger University of Massachusetts Amherst and Amazon Web Services
12:04
12m
Talk
When Faster Isn't Greener: The Hidden Costs of LLM-Based Code Optimization
Research Papers
Tristan Coignion Université de Lille - Inria, Clément Quinton Université de Lille, Romain Rouvoy University Lille 1 and INRIA
12:17
12m
Talk
United We Stand: Towards End-to-End Log-based Fault Diagnosis via Interactive Multi-Task Learning
Research Papers
Minghua He Peking University, Chiming Duan Peking University, Pei Xiao Peking University, Tong Jia Institute for Artificial Intelligence, Peking University, Beijing, China, Siyu Yu The Chinese University of Hong Kong, Shenzhen (CUHK-Shenzhen), Lingzhe Zhang Peking University, China, Weijie Hong Peking University, Jing Han ZTE Corporation, Yifan Wu Peking University, Ying Li School of Software and Microelectronics, Peking University, Beijing, China, Gang Huang Peking University