Tuning LLM-based Code Optimization via Meta-Prompting: An Industrial Perspective
This program is tentative and subject to change.
There is growing interest in leveraging large language models (LLMs) for code optimization. However, industrial platforms deploying multiple LLMs face a critical challenge: prompts optimized for one LLM often fail with others, requiring expensive model-specific prompt engineering. This cross-model prompt engineering bottleneck severely limits the practical deployment of multi-LLM optimization systems in production environments. To address this, we present Meta-Prompted Code Optimization (MPCO), a framework that automatically generates high-quality, task-specific prompts across diverse LLMs while meeting industrial efficiency requirements. MPCO leverages meta-prompting to dynamically synthesize context-aware optimization prompts by combining project metadata, task requirements, and LLM-specific contexts, and it integrates with the ARTEMIS industrial platform for automated validation and scalable deployment.
Our comprehensive evaluation on five real-world codebases, backed by 366 hours of runtime benchmarking, demonstrates MPCO’s effectiveness: it achieves overall performance improvements of up to 19.06% and attains the best statistical rank across all systems compared to baseline methods. Analysis shows that 96% of the top-performing optimizations stem from meaningful edits. Through systematic ablation studies and meta-prompter sensitivity analysis, we find that comprehensive context integration is essential for effective meta-prompting and that all three major LLMs can serve effectively as meta-prompters, providing actionable insights for industrial practitioners.
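As background for the meta-prompting idea described in the abstract, the sketch below shows how an optimization prompt could be synthesized from the three kinds of context mentioned above (project metadata, task requirements, and LLM-specific context) and then passed to a target LLM. All names, fields, and prompt wording are illustrative assumptions for exposition only; they are not the actual MPCO or ARTEMIS API.

```python
# Illustrative sketch of meta-prompting for code optimization.
# All identifiers (OptimizationContext, build_meta_prompt, optimize) are
# hypothetical and NOT the MPCO/ARTEMIS implementation; they only show how
# project metadata, task requirements, and an LLM-specific profile could be
# combined into a prompt that asks a meta-prompter LLM to write the final
# optimization prompt.
from dataclasses import dataclass
from typing import Callable

@dataclass
class OptimizationContext:
    project_metadata: str   # e.g. language, build system, performance-critical modules
    task_requirements: str  # e.g. "reduce runtime of parse_csv() without changing its API"
    llm_profile: str        # e.g. prompt style that works well for the target LLM

def build_meta_prompt(ctx: OptimizationContext) -> str:
    """Compose the meta-prompt sent to the meta-prompter LLM."""
    return (
        "You are a prompt engineer for code optimization.\n"
        f"Project metadata:\n{ctx.project_metadata}\n\n"
        f"Task requirements:\n{ctx.task_requirements}\n\n"
        f"Target LLM profile:\n{ctx.llm_profile}\n\n"
        "Write a single, self-contained prompt that instructs the target LLM to "
        "optimize the given code for runtime performance while preserving behavior. "
        "Output only the prompt."
    )

def optimize(code: str, ctx: OptimizationContext,
             meta_llm: Callable[[str], str],
             target_llm: Callable[[str], str]) -> str:
    """Two-stage flow: the meta-prompter generates a task-specific prompt,
    which is then applied to the code by the target LLM. In a production
    setting, the result would go through automated validation (tests and
    runtime benchmarks) before being accepted."""
    optimization_prompt = meta_llm(build_meta_prompt(ctx))
    return target_llm(f"{optimization_prompt}\n\nCode to optimize:\n{code}")
```

In a setup like this, only the prompt-generation step consumes model-specific context, which is what allows a single optimization and validation pipeline to serve multiple target LLMs.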
Tue 18 Nov (displayed time zone: Seoul)
16:00 - 17:00
16:00 | 10m Talk | Industry Showcase | Automated Prompt Generation for Code Intelligence: An Empirical Study and Experience in WeChat. Kexing Ji, Shiyun Fu (The Chinese University of Hong Kong), Cuiyun Gao (Harbin Institute of Technology, Shenzhen), Yujia Chen (The Chinese University of Hong Kong), Zezhou Yang (Tencent Inc.), Chaozheng Wang (The Chinese University of Hong Kong), Yuetang Deng (Tencent)
16:10 | 10m Talk | Industry Showcase | Evaluating Large Language Models for Functional and Maintainable Code in Industrial Settings: A Case Study at ASML. Yash Mundhra (Delft University of Technology), Max Valk (ASML), Maliheh Izadi (Delft University of Technology)
16:20 | 10m Talk | Industry Showcase | IntelliTopo: An IaC Generation Service for Industrial Network Topology Construction. Mingyu Shao (Harbin Institute of Technology, Shenzhen; PengCheng Laboratory), Zhao Liu (PengCheng Laboratory), Weihong Han (Peng Cheng Laboratory), Cuiyun Gao (Harbin Institute of Technology, Shenzhen), Jiachen Liu (Harbin Institute of Technology, Shenzhen), Qing Liao (Harbin Institute of Technology)
16:30 | 10m Talk | Industry Showcase | RepoMasterEval: Evaluating Code Completion via Real-World Repositories. Qinyun Wu (Bytedance Ltd.), Chao Peng (ByteDance), Pengfei Gao (ByteDance), Ruida Hu (Harbin Institute of Technology, Shenzhen), Haoyu Gan (ByteDance), Bo Jiang (Bytedance Network Technology), Jinhe Tang (ByteDance), Zhiwen Deng (ByteDance), Zhanming Guan (ByteDance), Cuiyun Gao (Harbin Institute of Technology, Shenzhen), Xia Liu (ByteDance), Ping Yang (Bytedance Network Technology)
16:40 | 10m Talk | NIER Track | Multiple Schema-Conformant Declarative Code Generation
16:50 | 10m Talk | Industry Showcase | Tuning LLM-based Code Optimization via Meta-Prompting: An Industrial Perspective. Jingzhi Gong (University of Leeds), Rafail Giavrimis (Turing Intelligence Technology), Paul Brookes (TurinTech AI), Vardan Voskanyan (TurinTech AI), Fan Wu (TurinTech AI), Mari Ashiga (University of West London/TurinTech AI), Matthew Truscott (TurinTech AI), Michail Basios (Turing Intelligence Technology), Leslie Kanthan (TurinTech AI), Jie Xu (University of Leeds), Zheng Wang (University of Leeds)