ModelDiff: Testing-Based DNN Similarity Comparison for Model Reuse Detection
Sat 17 Jul 2021, 09:00 - 09:20, at ISSTA 2 | Session 26 (time band 3): Testing Deep Learning Systems 5 | Chair(s): Junjie Chen
The knowledge of a deep learning model may be transferred to a student model, leading to intellectual property infringement or vulnerability propagation. Detecting such knowledge reuse is nontrivial because the suspect models may not be white-box accessible and/or may serve different tasks.
In this paper, we propose ModelDiff, a testing-based approach to deep learning model similarity comparison. Instead of directly comparing the weights, activations, or outputs of two models, we compare their behavioral patterns on the same set of test inputs. Specifically, the behavioral pattern of a model is represented as a decision distance vector (DDV), in which each element is the distance between the model's reactions to a pair of inputs. The knowledge similarity between two models is then measured as the cosine similarity between their DDVs. To evaluate ModelDiff, we created a benchmark of 144 pairs of models covering the most popular model reuse methods, including transfer learning, model compression, and model stealing. ModelDiff achieved 91.7% correctness on the benchmark, demonstrating its effectiveness for model reuse detection. A study on mobile deep learning apps further shows the feasibility of ModelDiff on real-world models.
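For readers who want a concrete picture of the comparison, the sketch below illustrates the DDV idea in Python. It is a minimal sketch, not the paper's implementation: the models are assumed to be callables returning output vectors (e.g., softmax scores), Euclidean distance is assumed for the per-pair reactions, and the test-input generation and decision threshold used by ModelDiff are omitted.

```python
# Minimal sketch of DDV-based model comparison (assumptions: each model is a
# callable that maps an input to an output vector such as softmax scores, and
# Euclidean distance is used between the model's reactions to a pair of inputs).
import numpy as np

def decision_distance_vector(model, input_pairs):
    """Compute the DDV: one distance per pair of test inputs."""
    ddv = []
    for x1, x2 in input_pairs:
        out1, out2 = model(x1), model(x2)        # the model's reactions to the two inputs
        ddv.append(np.linalg.norm(out1 - out2))  # distance between the two reactions
    return np.array(ddv)

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def model_similarity(model_a, model_b, input_pairs):
    """Knowledge similarity = cosine similarity between the two models' DDVs."""
    ddv_a = decision_distance_vector(model_a, input_pairs)
    ddv_b = decision_distance_vector(model_b, input_pairs)
    return cosine_similarity(ddv_a, ddv_b)
```

Because each model is reduced to a vector of distances over the same input pairs, the two models never need to share an output space, which is why the comparison can work even when the suspect model serves a different task; a pair of models that reuse the same knowledge is expected to yield a similarity close to 1, while independently trained models should score lower.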
Fri 16 Jul (times shown in the Brussels, Copenhagen, Madrid, Paris time zone)

02:00 - 03:20 | Session 13 (time band 2): Testing Deep Learning Systems 4 | Technical Papers at ISSTA 1 | Chair(s): Shiqing Ma (Rutgers University)

02:00 | 20m talk | Efficient White-Box Fairness Testing through Gradient Search | Lingfeng Zhang (East China Normal University), Yueling Zhang (Singapore Management University), Min Zhang (East China Normal University)
02:20 | 20m talk | DialTest: Automated Testing for Recurrent-Neural-Network-Driven Dialogue Systems
02:40 | 20m talk | AdvDoor: Adversarial Backdoor Attack of Deep Learning System | Quan Zhang (Tsinghua University), Yifeng Ding (Tsinghua University), Yongqiang Tian (Tianjin University), Jianmin Guo (Tsinghua University), Min Yuan (WeBank), Yu Jiang (Tsinghua University)
03:00 | 20m talk | ModelDiff: Testing-Based DNN Similarity Comparison for Model Reuse Detection | Yuanchun Li (Microsoft Research), Ziqi Zhang (Peking University), Bingyan Liu (Peking University), Ziyue Yang (Microsoft Research), Yunxin Liu (Tsinghua University)
Sat 17 Jul (times shown in the Brussels, Copenhagen, Madrid, Paris time zone)

08:00 - 09:20 | Session 26 (time band 3): Testing Deep Learning Systems 5 | Technical Papers at ISSTA 2 | Chair(s): Junjie Chen (Tianjin University)

08:00 | 20m talk | Efficient White-Box Fairness Testing through Gradient Search | Lingfeng Zhang (East China Normal University), Yueling Zhang (Singapore Management University), Min Zhang (East China Normal University)
08:20 | 20m talk | DialTest: Automated Testing for Recurrent-Neural-Network-Driven Dialogue Systems
08:40 | 20m talk | AdvDoor: Adversarial Backdoor Attack of Deep Learning System | Quan Zhang (Tsinghua University), Yifeng Ding (Tsinghua University), Yongqiang Tian (Tianjin University), Jianmin Guo (Tsinghua University), Min Yuan (WeBank), Yu Jiang (Tsinghua University)
09:00 | 20m talk | ModelDiff: Testing-Based DNN Similarity Comparison for Model Reuse Detection | Yuanchun Li (Microsoft Research), Ziqi Zhang (Peking University), Bingyan Liu (Peking University), Ziyue Yang (Microsoft Research), Yunxin Liu (Tsinghua University)