ASE 2024
Sun 27 October - Fri 1 November 2024 Sacramento, California, United States

This program is tentative and subject to change.

Thu 31 Oct 2024 16:00 - 16:15 at Magnolia - SE for AI 3

Deploying deep learning (DL) models in mobile applications (apps) has become increasingly popular. However, existing studies show that attackers can easily reverse-engineer mobile DL models in apps to steal intellectual property or craft effective attacks. Model obfuscation has recently been proposed to defend against such reverse engineering by obfuscating DL model representations, such as weights and computational graphs, without affecting model performance. Existing model obfuscation methods either obfuscate the model representation statically, or use half-dynamic methods that require users to restore the model information through additional input arguments. Neither static nor half-dynamic methods provide sufficient protection for on-device DL models: because the correct model information and intermediate results must be recovered at runtime, attackers can use dynamic analysis to mine this sensitive information from the inference code. We assess the vulnerability of existing obfuscation strategies using an instrumentation method and tool, DLModelExplorer, that dynamically extracts the correct sensitive model information (i.e., weights and computational graphs) at runtime. Experiments show it achieves very high attack performance (e.g., a 98.76% weight extraction rate and a 99.89% obfuscating operator classification rate). To defend against such dynamic-instrumentation attacks, we propose DynaMO, a dynamic model obfuscation strategy similar to homomorphic encryption: obfuscation and recovery are performed only through simple linear transformations of the weights of randomly coupled eligible operators, making it a fully dynamic obfuscation strategy. Experiments show that our proposed strategy dramatically improves model security compared with existing obfuscation strategies, with only negligible overhead for model inference. Our prototype tool is publicly available at https://github.com/AnonymousAuthor000/code112.
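
The abstract's central idea, obfuscating and recovering weights through simple linear transformations over coupled operators, can be illustrated with a toy example. The NumPy sketch below is an illustrative assumption based only on the abstract, not the authors' implementation; the names (obfuscate_pair, alpha) and the two-layer setup are hypothetical. It shows how scaling one layer's weights by a random positive factor and compensating in a coupled downstream layer leaves inference outputs unchanged, because ReLU is positive-homogeneous, while the stored weights no longer match the plaintext model.

```python
# Minimal sketch of coupled linear-transformation obfuscation (assumed, not
# the paper's exact scheme). Exploits relu(a * x) == a * relu(x) for a > 0:
# scale one layer's weights by a random positive factor and the coupled
# layer's weights by its inverse, so the composed output is unchanged.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def obfuscate_pair(w1, b1, w2, alpha):
    """Scale the first layer by alpha and compensate in the coupled second
    layer; neither stored matrix now equals the original plaintext weights."""
    return alpha * w1, alpha * b1, w2 / alpha

# Original two-layer model: y = relu(x @ w1 + b1) @ w2
w1, b1 = rng.normal(size=(4, 8)), rng.normal(size=8)
w2 = rng.normal(size=(8, 3))
x = rng.normal(size=(5, 4))
y_plain = relu(x @ w1 + b1) @ w2

# Obfuscate with a random positive factor; inference stays numerically equal.
alpha = float(rng.uniform(0.5, 2.0))
w1_o, b1_o, w2_o = obfuscate_pair(w1, b1, w2, alpha)
y_obf = relu(x @ w1_o + b1_o) @ w2_o

assert np.allclose(y_plain, y_obf)
```

In this toy version a single scalar couples the two operators; a real scheme would presumably draw fresh transformations at runtime and pair operators randomly, so that no static snapshot of the weights reveals the original model.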


Thu 31 Oct

Displayed time zone: Pacific Time (US & Canada)

15:30 - 16:30
SE for AI 3 at Magnolia
15:30
15m
Talk
DevMuT: Testing Deep Learning Framework via Developer Expertise-Based Mutation
Research Papers
Yanzhou Mu, Juan Zhai (University of Massachusetts at Amherst), Chunrong Fang (Nanjing University), Xiang Chen (Nantong University), Zhixiang Cao (Xi'an Jiaotong University), Peiran Yang (State Key Laboratory for Novel Software Technology, Nanjing University, China), Yinglong Zou (Nanjing University), Tao Zheng (Nanjing University), Zhenyu Chen (Nanjing University)
15:45
15m
Talk
Mutation-Based Deep Learning Framework Testing Method in JavaScript Environment
Research Papers
Yinglong Zou (Nanjing University), Juan Zhai (University of Massachusetts at Amherst), Chunrong Fang (Nanjing University), Jiawei Liu (University of Illinois at Urbana-Champaign), Tao Zheng (Nanjing University), Zhenyu Chen (Nanjing University)
16:00
15m
Talk
DynaMO: Protecting Mobile DL Models through Coupling Obfuscated DL Operators
Research Papers
Mingyi Zhou (Monash University), Xiang Gao (Beihang University), Xiao Chen (University of Newcastle), Chunyang Chen (TU Munich), John Grundy (Monash University), Li Li (Beihang University)
16:15
15m
Talk
GlitchProber: Advancing Effective Detection and Mitigation of Glitch Tokens in Large Language Models
Research Papers
Zhibo Zhang (Huazhong University of Science and Technology), Wuxia Bai (Huazhong University of Science and Technology), Yuxi Li (Huazhong University of Science and Technology), Mark Huasong Meng (National University of Singapore), Kailong Wang (Huazhong University of Science and Technology), Ling Shi (Nanyang Technological University), Li Li (Beihang University), Jun Wang (Post Luxembourg), Haoyu Wang (Huazhong University of Science and Technology)