Most existing pre-trained language models for source code focus on learning from static code text, typically augmented with static code structures (abstract syntax trees, dependency graphs, etc.). However, a program's semantics are not fully exposed until it is actually executed. Without an understanding of program execution, statically pre-trained models fail to comprehensively capture dynamic code properties, such as branch coverage and runtime variable values, and they are consequently less effective at code understanding tasks, such as retrieving semantic clones and detecting software vulnerabilities.
To close the gap between the static nature of language models and the dynamic characteristics of programs, we introduce TRACED, an execution-aware pre-training strategy for source code. Specifically, we pre-train code language models on a combination of source code, executable inputs, and the corresponding execution traces. Our goal is to teach code models complex execution logic during pre-training, enabling the model to statically estimate dynamic code properties without repeatedly executing code during task-specific fine-tuning.
To demonstrate the effectiveness of our proposed approach, we fine-tune and evaluate TRACED on three downstream tasks: static execution estimation, clone retrieval, and vulnerability detection. The empirical results show that TRACED improves over statically pre-trained code models by 12.4% (relative) on complete execution path prediction and by 25.2% on runtime variable value prediction. TRACED also significantly outperforms statically pre-trained models on clone retrieval and vulnerability detection across four public benchmarks.
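The abstract refers to execution traces that pair executed lines with runtime variable values. As a loose illustration only, not the paper's tracing infrastructure, the sketch below uses Python's `sys.settrace` to record the kind of dynamic properties (line coverage, variable values) that TRACED learns to estimate statically; the names `collect_trace` and `example` are hypothetical.

```python
import sys

def collect_trace(fn, *args):
    """Run fn(*args) and snapshot local variables at each executed line.

    Illustrative sketch only: it shows one way to gather an execution
    trace with runtime variable values, not TRACED's actual pipeline.
    """
    trace = []  # list of (line_number, {variable: value}) snapshots

    def tracer(frame, event, arg):
        # Record only line events inside the traced function itself.
        if event == "line" and frame.f_code is fn.__code__:
            trace.append((frame.f_lineno, dict(frame.f_locals)))
        return tracer

    sys.settrace(tracer)
    try:
        result = fn(*args)
    finally:
        sys.settrace(None)  # always detach the tracer
    return result, trace

def example(x):
    y = x * 2
    if y > 5:      # branch taken when x > 2
        y -= 1
    return y

result, trace = collect_trace(example, 4)
covered_lines = {lineno for lineno, _ in trace}  # executed-line coverage
```

A statically pre-trained model only ever sees the text of `example`; the trace additionally reveals which branch was taken and how `y` evolved, which is the execution signal TRACED folds into pre-training.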
Thu 18 Apr (times displayed in the Lisbon time zone)
11:00 - 12:30 | Language Models and Generated Code 2 (Demonstrations / Research Track) at Maria Helena Vieira da Silva
Chair(s): Reyhaneh Jabbarvand (University of Illinois at Urbana-Champaign)

11:00 (15m, Talk) | Exploring the Potential of ChatGPT in Automated Code Refinement: An Empirical Study (Research Track)
Qi Guo (Tianjin University, China), Junming Cao (Fudan University), Xiaofei Xie (Singapore Management University), Shangqing Liu (Nanyang Technological University), Xiaohong Li (Tianjin University), Bihuan Chen (Fudan University), Xin Peng (Fudan University)

11:15 (15m, Talk) | Deep Learning or Classical Machine Learning? An Empirical Study on Log-Based Anomaly Detection (Research Track)
BoXi Yu (The Chinese University of Hong Kong, Shenzhen), Jiayi Yao (The Chinese University of Hong Kong, Shenzhen), Qiuai Fu (Huawei Cloud Computing Technologies Co., Ltd.), Zhiqing Zhong (Chinese University of Hong Kong, Shenzhen), Haotian Xie (The Chinese University of Hong Kong, Shenzhen), Yaoliang Wu (Huawei Cloud Computing Technologies Co., Ltd.), Yuchi Ma (Huawei Cloud Computing Technologies Co., Ltd.), Pinjia He (Chinese University of Hong Kong, Shenzhen)

11:30 (15m, Talk) | TRACED: Execution-aware Pre-training for Source Code (Research Track)
Yangruibo Ding (Columbia University), Benjamin Steenhoek (Iowa State University), Kexin Pei (The University of Chicago), Gail Kaiser (Columbia University), Wei Le (Iowa State University), Baishakhi Ray (AWS AI Labs)

11:45 (15m, Talk) | On Extracting Specialized Code Abilities from Large Language Models: A Feasibility Study (Research Track)
Li Zongjie (Hong Kong University of Science and Technology), Chaozheng Wang (The Chinese University of Hong Kong), Pingchuan Ma (HKUST), Chaowei Liu (National University of Singapore), Shuai Wang (The Hong Kong University of Science and Technology), Daoyuan Wu (Nanyang Technological University), Cuiyun Gao (Harbin Institute of Technology), Yang Liu (Nanyang Technological University)

12:00 (15m, Talk) | When Neural Code Completion Models Size up the Situation: Attaining Cheaper and Faster Completion through Dynamic Model Inference (Research Track)
Zhensu Sun (Singapore Management University), Xiaoning Du (Monash University, Australia), Fu Song (State Key Laboratory of Computer Science and Institute of Software, Chinese Academy of Sciences), Shangwen Wang (National University of Defense Technology), Li Li (Beihang University). Pre-print available.

12:15 (7m, Talk) | TestSpark: IntelliJ IDEA's Ultimate Test Generation Companion (Demonstrations)
Arkadii Sapozhnikov (JetBrains Research), Mitchell Olsthoorn (Delft University of Technology), Annibale Panichella (Delft University of Technology), Vladimir Kovalenko (JetBrains Research), Pouria Derakhshanfar (JetBrains Research)