ASE 2024
Sun 27 October - Fri 1 November 2024, Sacramento, California, United States

This program is tentative and subject to change.

Thu 31 Oct 2024 16:15 - 16:30 at Gardenia - Malicious code and package

The proliferation of pre-trained models (PTMs) and datasets has led to the emergence of centralized model hubs like Hugging Face, which facilitate collaborative development and reuse. However, recent security reports have uncovered vulnerabilities and instances of malicious attacks within these platforms, highlighting growing security concerns. This paper presents the first systematic study of malicious code poisoning attacks on pre-trained model hubs, focusing on the Hugging Face platform. We conduct a comprehensive threat analysis, develop a taxonomy of model formats, and perform root cause analysis of vulnerable formats. While existing tools like Fickling and ModelScan offer some protection, they face limitations in semantic-level analysis and comprehensive threat detection. To address these challenges, we propose MalHug, an end-to-end pipeline tailored for Hugging Face that combines dataset loading script extraction, model deserialization, in-depth taint analysis, and heuristic pattern matching to detect and classify malicious code poisoning attacks in datasets and models. In partnership with Ant Group, a leading financial technology company, we have implemented and deployed MalHug in a real-world industrial environment. It has been operational for over three months on a mirrored Hugging Face instance within Ant Group’s infrastructure, demonstrating its effectiveness and scalability in a large-scale industrial setting. During this period, MalHug has monitored more than 705K models and 176K datasets, uncovering 264 malicious models and 9 malicious dataset loading scripts. These findings reveal a range of security threats, including reverse shells, browser credential theft, and system reconnaissance. This work not only bridges a critical gap in understanding the security of the PTM supply chain but also provides a practical, industry-tested solution for enhancing the security of pre-trained model hubs.
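The "models are codes" premise rests on Python's pickle format, which underlies common PTM serialization formats such as PyTorch's .bin/.pt files: deserializing a pickle stream can invoke arbitrary callables named in the stream. The sketch below illustrates both sides under stated assumptions; Payload, scan_pickle, and the SUSPICIOUS_GLOBALS denylist are hypothetical names for illustration, in the spirit of opcode-level scanners like Fickling and ModelScan, and are not MalHug's actual implementation.

```python
import os
import pickle
import pickletools

# Hypothetical denylist for illustration only; real scanners such as
# Fickling and ModelScan ship far broader rule sets.
SUSPICIOUS_GLOBALS = {
    ("os", "system"), ("posix", "system"), ("nt", "system"),
    ("subprocess", "Popen"), ("builtins", "eval"), ("builtins", "exec"),
}

def scan_pickle(data: bytes) -> list[str]:
    """Statically flag pickle streams that import suspicious callables."""
    findings = []
    recent_strings = []  # string operands that may feed STACK_GLOBAL
    for opcode, arg, pos in pickletools.genops(data):
        if isinstance(arg, str):
            recent_strings.append(arg)
        if opcode.name == "GLOBAL":
            # Protocols <= 3: operand is "module name" as one space-separated string.
            module, _, name = arg.partition(" ")
            if (module, name) in SUSPICIOUS_GLOBALS:
                findings.append(f"{module}.{name} imported at offset {pos}")
        elif opcode.name == "STACK_GLOBAL" and len(recent_strings) >= 2:
            # Protocols >= 4: module and attribute are the two preceding strings.
            module, name = recent_strings[-2], recent_strings[-1]
            if (module, name) in SUSPICIOUS_GLOBALS:
                findings.append(f"{module}.{name} imported at offset {pos}")
    return findings

class Payload:
    """The root cause: __reduce__ lets a pickled object name any callable
    plus arguments, which pickle.loads() will invoke on deserialization."""
    def __reduce__(self):
        return (os.system, ("echo pwned",))  # runs a shell command on load

blob = pickle.dumps(Payload())  # a "model file" carrying the payload
# pickle.loads(blob) would execute the command; scanning it instead:
print(scan_pickle(blob))        # e.g. ['posix.system imported at offset ...']
```

As the abstract notes, purely syntactic checks of this kind are limited to known-bad imports and are easy to evade, which is why MalHug layers taint analysis and heuristic pattern matching on top of deserialization-level inspection.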

Thu 31 Oct

Displayed time zone: Pacific Time (US & Canada)

15:30 - 16:30
Malicious code and package (Research Papers / Industry Showcase) at Gardenia
15:30
15m
Talk
RMCBench: Benchmarking Large Language Models' Resistance to Malicious Code
Research Papers
Jiachi Chen (Sun Yat-sen University), Qingyuan Zhong (Sun Yat-sen University), Yanlin Wang (Sun Yat-sen University), Kaiwen Ning (Sun Yat-sen University), Yongkun Liu (Sun Yat-sen University), Zenan Xu (Tencent AI Lab), Zhe Zhao (Tencent AI Lab), Ting Chen (University of Electronic Science and Technology of China), Zibin Zheng (Sun Yat-sen University)
15:45
15m
Talk
SpiderScan: Practical Detection of Malicious NPM Packages Based on Graph-Based Behavior Modeling and Matching
Research Papers
Yiheng Huang (Fudan University), Ruisi Wang (Fudan University), Wen Zheng (Fudan University), Zhuotong Zhou (Fudan University), Susheng Wu (Fudan University), Shulin Ke (Fudan University), Bihuan Chen (Fudan University), Shan Gao (Huawei), Xin Peng (Fudan University)
16:00
15m
Talk
1+1>2: Integrating Deep Code Behaviors with Metadata Features for Malicious PyPI Package Detection
Research Papers
Xiaobing Sun (Yangzhou University), Xingan Gao (Yangzhou University), Sicong Cao (Yangzhou University), Lili Bo (Yangzhou University), Xiaoxue Wu (Yangzhou University), Kaifeng Huang (Tongji University)
16:15
15m
Talk
Models Are Codes: Towards Measuring Malicious Code Poisoning Attacks on Pre-trained Model Hubs
Industry Showcase
Jian Zhao (Huazhong University of Science and Technology), Shenao Wang (Huazhong University of Science and Technology), Yanjie Zhao (Huazhong University of Science and Technology), Xinyi Hou (Huazhong University of Science and Technology), Kailong Wang (Huazhong University of Science and Technology), Peiming Gao (MYbank, Ant Group), Yuanchao Zhang (MYbank, Ant Group), Chen Wei (MYbank, Ant Group), Haoyu Wang (Huazhong University of Science and Technology)