Coverage-Based Harmfulness Testing for LLM Code Transformation
This program is tentative and subject to change.
Harmful content embedded in program elements within source code may have detrimental impact on mental health of software developers, and promote harmful behavior. Our key insight is that software developers may introduce harmful content into source code when using Code Large Language Models (Code LLMs) to perform program transformations tasks. To understand the space of program transformations that may be used to introduce harmful content into auto-generated code, we conduct a preliminary study that revealed 32 different types of transformations that can be used to introduce harmful content in source code. Based on our study, we propose CHT, a novel coverage-guided harmfulness testing framework that automatically synthesizes prompts using a set of prompt templates injected with diverse harmful keywords to perform various types of transformations on a set of mined benign programs. Instead of checking if the content moderation has been bypassed as prior approaches, CHT performs output damage measurement to assess potential harm that can be introduced by the generated outputs (i.e., natural language explanation and modified code). By considering output damage, CHT revealed several problems in Code LLMs: (1) bugs in content moderation for code (Code LLMs produce the harmful code without providing any warning), (2) inadequacy in performing code-related task (e.g., Code LLMs may resort to explaining the given code instead of performing the instructed transformation task), and (3) lenient content moderation (gives warning but the modified code with harmful content is still produced). Our evaluations of CHT on four Code LLMs and gpt-4o-mini (general LLM) show that content moderation in Code LLMs is relatively easy to bypass where LLMs may generate harmful keywords embedded within identifier names or code comments without giving any warning (65.93% in our evaluation). To improve the robustness of content moderation in code-related tasks, we propose a two-phrase approach that checks if the prompt contains any harmful content before generating any output. Our evaluation shows that our proposed approach improves the content moderation of Code LLM by 483.76%.
This program is tentative and subject to change.
Tue 18 NovDisplayed time zone: Seoul change
11:00 - 12:30 | |||
11:00 10mTalk | Coverage-Based Harmfulness Testing for LLM Code Transformation Research Papers Honghao Tan Concordia University, Haibo Wang Concordia University, Diany Pressato Concordia University, Yisen Xu Software PErformance, Analysis, and Reliability (SPEAR) lab, Concordia University, Montreal, Canada, Shin Hwei Tan Concordia University | ||
11:10 10mTalk | RealisticCodeBench: Towards More Realistic Evaluation of Large Language Models for Code Generation Research Papers Xiao Yu Zhejiang University, Haoxuan Chen Wuhan University of Technology, Lei Liu Xi’an Jiaotong University, Xing Hu Zhejiang University, Jacky Keung City University of Hong Kong, Xin Xia Zhejiang University | ||
11:20 10mTalk | Code-DiTing: Automatic Evaluation of Code Generation without References or Test Cases Research Papers Guang Yang , Yu Zhou Nanjing University of Aeronautics and Astronautics, Xiang Chen Nantong University, Wei Zheng Northwestern Polytechnical University, Xing Hu Zhejiang University, Xin Zhou Singapore Management University, Singapore, David Lo Singapore Management University, Taolue Chen Birkbeck, University of London Pre-print | ||
11:30 10mTalk | An Agent-based Evaluation Framework for Complex Code Generation Research Papers Xinchen Wang Harbin Institute of Technology, Pengfei Gao ByteDance, Chao Peng ByteDance, Ruida Hu Harbin Institute of Technology, Shenzhen, Cuiyun Gao Harbin Institute of Technology, Shenzhen | ||
11:40 10mTalk | PseudoFix: Refactoring Distorted Structures in Decompiled C Pseudocode Research Papers Gangyang Li University of Science and Technology of China, Xiuwei Shang University of Science and Technology of China, Shaoyin Cheng University of Science and Technology of China, junqi zhang University of Science and Technology of China, Li Hu , Xu Zhu University of Science and Technology of China, Weiming Zhang University of Science and Technology of China, Nenghai Yu School of Cyber Security, University of Science and Technology of China | ||
11:50 10mTalk | Evaluating and Improving Framework-based Parallel Code Completion with Large Language Models Research Papers Ke Liu , Qinglin Wang Shandong Normal University, Xiang Chen Nantong University, Guang Yang , YiGui Feng National University of Defense Technology, Gencheng Liu National University of Defense Technology, Jie Liu Institute of Software, Chinese Academy of Sciences | ||
12:00 10mTalk | Variational Prefix Tuning for diverse and accurate code summarization using pre-trained language models Journal-First Track Junda Zhao Department of Mechanical and Industrial Engineering, University of Toronto, Yuliang Song Department of Mechanical and Industrial Engineering, University of Toronto, Eldan Cohen Department of Mechanical and Industrial Engineering, University of Toronto | ||
12:10 10mTalk | Effective Code Membership Inference for Code Completion Models via Adversarial Prompts Research Papers Yuan Jiang Harbin Institute of Technology, Zehao Li Harbin Institute of Technology, Shan Huang East China Normal University, Christoph Treude Singapore Management University, Xiaohong Su Harbin Institute of Technology, Tiantian Wang Harbin Institute of Technology | ||
12:20 10mTalk | LongCodeZip: Compress Long Context for Code Language Models Research Papers Yuling Shi Shanghai Jiao Tong University, Yichun Qian Stanford University, Hongyu Zhang Chongqing University, Beijun Shen Shanghai Jiao Tong University, Xiaodong Gu Shanghai Jiao Tong University Pre-print Media Attached | ||