ASE 2025
Sun 16 - Thu 20 November 2025 Seoul, South Korea

Large Language Models (LLMs) have demonstrated significant capability in code generation, but their potential in code optimization remains underexplored. Previous LLM-based code optimization approaches focus exclusively on function-level optimization and overlook interactions between functions, failing to generalize to real-world development scenarios. Code editing techniques show great potential for project-level code optimization, yet they face challenges associated with invalid edits and suboptimal internal functions. To address these gaps, we propose PEACE, a novel hybrid framework for Project-level pErformance optimization through Automatic Code Editing, which also ensures the overall correctness and integrity of the project. PEACE integrates three key phases: dependency-aware optimizing function sequence construction, valid associated edits identification, and performance editing iteration. To rigorously evaluate the effectiveness of PEACE, we construct PEACExec, the first benchmark comprising 146 real-world optimization tasks from 47 high-impact GitHub Python projects, along with high-quality test cases and executable environments. Extensive experiments demonstrate PEACE's superiority over state-of-the-art baselines, achieving a 69.2% correctness rate (pass@1) and a +46.9% opt rate in execution efficiency. Notably, PEACE outperforms all baselines by significant margins, particularly on complex optimization tasks involving multiple functions. Moreover, further experiments validate the contribution of each component in PEACE, as well as the rationale and effectiveness of our hybrid framework design.
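
The abstract does not detail how the dependency-aware optimizing function sequence is built. As an illustrative sketch only (not PEACE's actual algorithm), one plausible reading is to topologically order the project's call graph so that callee functions are optimized before the functions that depend on them; the function name and graph below are hypothetical.

```python
# Hypothetical sketch of a dependency-aware optimization order: callees are
# scheduled before their callers so downstream edits see already-optimized
# dependencies. This is an assumption for illustration, not PEACE's method.
from graphlib import TopologicalSorter

def build_optimizing_sequence(call_graph: dict[str, set[str]]) -> list[str]:
    """call_graph maps each function name to the set of functions it calls."""
    # TopologicalSorter treats the mapped values as predecessors, so callees
    # appear in the output before the functions that call them.
    return list(TopologicalSorter(call_graph).static_order())

if __name__ == "__main__":
    graph = {
        "load_data": set(),
        "preprocess": {"load_data"},
        "train": {"preprocess", "load_data"},
    }
    print(build_optimizing_sequence(graph))
    # ['load_data', 'preprocess', 'train'] -- callees first, callers last
```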