ASE 2024
Sun 27 October - Fri 1 November 2024, Sacramento, California, United States
Thu 31 Oct 2024 15:30 - 15:45 at Gardenia - Malicious code and package Chair(s): Curtis Atkisson

The emergence of Large Language Models (LLMs) has significantly influenced various aspects of software development. Despite their benefits, LLMs also pose notable risks, including the potential to generate harmful content and to be abused by malicious developers to create malicious code. Several previous studies have examined the ability of LLMs to resist generating harmful content that violates human ethical standards, such as biased or offensive material. However, no research has evaluated the ability of LLMs to resist malicious code generation. To fill this gap, we propose RMCBench, the first benchmark, comprising 473 prompts, designed to assess the ability of LLMs to resist malicious code generation. The benchmark employs two scenarios: a text-to-code scenario, where LLMs are prompted with descriptions to generate code, and a code-to-code scenario, where LLMs translate or complete existing malicious code. Based on RMCBench, we conduct an empirical study of 11 representative LLMs to assess their ability to resist malicious code generation. Our findings indicate that current LLMs have a limited ability to resist malicious code generation, with an average refusal rate of 40.36% in the text-to-code scenario and 11.52% in the code-to-code scenario. The average refusal rate of all LLMs on RMCBench is only 28.71%; ChatGPT-4 has a refusal rate of only 35.73%. We also analyze the factors that affect LLMs' ability to resist malicious code generation and provide implications for developers to enhance model robustness.
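For readers unfamiliar with the refusal-rate metric reported above, the sketch below shows one plausible way to compute per-scenario and overall refusal rates from labeled model responses. The record layout and field names ("scenario", "refused") are illustrative assumptions, not the benchmark's actual data schema or evaluation code.

```python
# Hypothetical sketch of refusal-rate computation for an RMCBench-style study.
# Each record marks whether the model refused a single prompt; the schema is assumed.
from collections import defaultdict

def refusal_rates(records):
    """records: iterable of dicts like {"scenario": "text-to-code", "refused": True}."""
    totals = defaultdict(int)    # prompts seen per scenario
    refusals = defaultdict(int)  # refusals per scenario
    for r in records:
        totals[r["scenario"]] += 1
        refusals[r["scenario"]] += int(r["refused"])
    per_scenario = {s: refusals[s] / totals[s] for s in totals}
    overall = sum(refusals.values()) / sum(totals.values())
    return per_scenario, overall

# Example with made-up numbers (not the paper's results):
records = (
    [{"scenario": "text-to-code", "refused": i < 4} for i in range(10)]
    + [{"scenario": "code-to-code", "refused": i < 1} for i in range(10)]
)
print(refusal_rates(records))
```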

Thu 31 Oct

Displayed time zone: Pacific Time (US & Canada)

15:30 - 16:30
Malicious code and package
Research Papers / Industry Showcase at Gardenia
Chair(s): Curtis Atkisson UW
15:30
15m
Talk
RMCBench: Benchmarking Large Language Models' Resistance to Malicious Code
Research Papers
Jiachi Chen Sun Yat-sen University, Qingyuan Zhong Sun Yat-sen University, Yanlin Wang Sun Yat-sen University, Kaiwen Ning Sun Yat-sen University, Yongkun Liu Sun Yat-sen University, Zenan Xu Tencent AI Lab, Zhe Zhao Tencent AI Lab, Ting Chen University of Electronic Science and Technology of China, Zibin Zheng Sun Yat-sen University
15:45
15m
Talk
SpiderScan: Practical Detection of Malicious NPM Packages Based on Graph-Based Behavior Modeling and Matching
Research Papers
Yiheng Huang Fudan University, Ruisi Wang Fudan University, Wen Zheng Fudan University, Zhuotong Zhou Fudan University, Susheng Wu Fudan University, Shulin Ke Fudan University, Bihuan Chen Fudan University, Shan Gao Huawei, Xin Peng Fudan University
16:00
15m
Talk
1+1>2: Integrating Deep Code Behaviors with Metadata Features for Malicious PyPI Package Detection
Research Papers
Xiaobing Sun Yangzhou University, Xingan Gao Yangzhou University, Sicong Cao Yangzhou University, Lili Bo Yangzhou University, Xiaoxue Wu Yangzhou University, Kaifeng Huang Tongji University
Media Attached
16:15
15m
Talk
Models Are Codes: Towards Measuring Malicious Code Poisoning Attacks on Pre-trained Model Hubs
Industry Showcase
Jian Zhao Huazhong University of Science and Technology, Shenao Wang Huazhong University of Science and Technology, Yanjie Zhao Huazhong University of Science and Technology, Xinyi Hou Huazhong University of Science and Technology, Kailong Wang Huazhong University of Science and Technology, Peiming Gao MYbank, Ant Group, Yuanchao Zhang Mybank, Ant Group, Chen Wei MYbank, Ant Group, Haoyu Wang Huazhong University of Science and Technology