This program is tentative and subject to change.
Leveraging deep learning (DL)-based code analysis tools to solve software engineering tasks is becoming increasingly popular. However, code models often suffer performance degradation for various reasons (e.g., shifts in the code data). Retraining is often required to address these issues, but frequent model updates are costly in both labeling and deployment. In this paper, we explore an alternative solution: adapting the program inputs to the code models. This can be achieved in two steps: 1) input validation, which identifies whether an input is an out-of-scope program that is beyond a model’s handling capability, and 2) input adaptation, which adapts out-of-scope inputs into in-scope inputs. Validating program inputs is challenging, as current techniques focus on continuous inputs such as image data and fail on discrete inputs like code data, which have unique characteristics and are processed differently by deep learning models. Adapting out-of-scope programs is also challenging due to their vast search spaces. Therefore, in this paper, we propose CodeImprove, which distinguishes out-of-scope inputs from normal inputs and converts such out-of-scope inputs back into in-scope inputs through program transformation. In particular, we propose a validity score metric to identify out-of-scope inputs and leverage genetic algorithms to apply semantics-preserving program transformations that convert out-of-scope inputs into in-scope inputs. Our experimental results show that CodeImprove improves accuracy by up to 8.78%, a relative improvement of up to 51.28%, across three code models on two SE tasks. Additionally, our input validation is promising in detecting out-of-scope inputs (AUC score of 0.924).
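The adaptation step described above (a genetic-algorithm search over semantics-preserving program transformations, guided by a validity score) can be illustrated with a minimal sketch. Everything here is assumed for illustration: `rename_var` stands in for a real semantics-preserving transformation, and `validity_score` is a toy placeholder, not the metric proposed in the paper.

```python
import random

def rename_var(code: str, old: str, new: str) -> str:
    # Toy semantics-preserving transformation: variable renaming via naive
    # token substitution (a real tool would rewrite the AST instead).
    return code.replace(old, new)

def validity_score(code: str) -> float:
    # Placeholder for the paper's validity score metric; here, a toy
    # heuristic that simply prefers shorter programs.
    return 1.0 / (1.0 + len(code))

def adapt(code: str, generations: int = 10, pop_size: int = 8) -> str:
    """Genetic-algorithm-style search: mutate variants with a
    semantics-preserving transformation, then keep the top scorers."""
    rng = random.Random(0)
    names = ["a", "b", "tmp", "val", "x1"]
    population = [code]
    for _ in range(generations):
        # Mutation: apply a random renaming to randomly chosen members.
        offspring = [
            rename_var(rng.choice(population), "counter", rng.choice(names))
            for _ in range(pop_size)
        ]
        # Selection with elitism: retain the highest-scoring variants.
        population = sorted(set(population + offspring),
                            key=validity_score, reverse=True)[:pop_size]
    return population[0]

snippet = "counter = 0\nfor i in range(10):\n    counter += i\nprint(counter)"
best = adapt(snippet)
```

Because the original program stays in the candidate pool (elitism), the returned variant never scores below the input; in the real system the search would instead maximize the model-derived validity score while preserving program semantics.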
Wed 30 Apr (displayed time zone: Eastern Time, US & Canada)
11:00 - 12:30 | SE for AI 1 (New Ideas and Emerging Results (NIER) / SE In Practice (SEIP) / Research Track) at 215
Chair(s): Houari Sahraoui (DIRO, Université de Montréal)

11:00 (15m, Talk) | A Test Oracle for Reinforcement Learning Software based on Lyapunov Stability Control Theory [SE for AI; Research Track; Award Winner]
Shiyu Zhang, Haoyang Song, Qixin Wang, Henghua Shen, Yu Pei (The Hong Kong Polytechnic University)

11:15 (15m, Talk) | CodeImprove: Program Adaptation for Deep Code Models [SE for AI; Research Track]

11:30 (15m, Talk) | FairQuant: Certifying and Quantifying Fairness of Deep Neural Networks [SE for AI; Research Track]
Brian Hyeongseok Kim, Jingbo Wang, Chao Wang (University of Southern California)

11:45 (15m, Talk) | When in Doubt Throw It out: Building on Confident Learning for Vulnerability Detection [Security; SE for AI; New Ideas and Emerging Results (NIER)]
Yuanjun Gong (Renmin University of China), Fabio Massacci (University of Trento; Vrije Universiteit Amsterdam)

12:00 (15m, Talk) | Evaluation of Tools and Frameworks for Machine Learning Model Serving [SE for AI; SE In Practice (SEIP)]
Niklas Beck, Benny Stein, Lennard Helmer (Fraunhofer Institute for Intelligent Analysis and Information Systems IAIS), Dennis Wegener (T-Systems International GmbH)

12:15 (15m, Talk) | Real-time Adapting Routing (RAR): Improving Efficiency Through Continuous Learning in Software Powered by Layered Foundation Models [SE for AI; SE In Practice (SEIP)]
Kirill Vasilevski (Huawei Canada), Dayi Lin (Centre for Software Excellence, Huawei Canada), Ahmed E. Hassan (Queen’s University)