FedSlice: Protecting Federated Learning Models from Malicious Participants with Model Slicing
Federated learning (FL) is a new crowdsourced development paradigm for DNN models, which are also called ``software 2.0''. In practice, the security and privacy of FL can be compromised by many attacks, such as free-rider attacks, adversarial attacks, gradient leakage attacks, and inference attacks. Conventional defensive techniques have low efficiency because they deploy heavy encryption or rely on trusted execution environments (TEEs). To protect FL from these attacks more efficiently, this paper proposes FedSlice, which prevents malicious participants from obtaining the whole server-side model while preserving the performance goal of FL. FedSlice breaks the server-side model into several slices and delivers one slice to each participant. Thus, a malicious participant can only obtain a subset of the server-side model, preventing them from conducting effective attacks. We evaluate FedSlice against these attacks, and the results show that FedSlice provides an effective defense: server-side model leakage is reduced from 100% to 43.45%, the success rate of adversarial attacks from 100% to 11.66%, the average accuracy of membership inference from 71.91% to 51.58%, and the data leakage from shared gradients to the level of random guessing. Moreover, FedSlice introduces less than 2% accuracy loss and about 14% computation overhead. To the best of our knowledge, this is the first paper to discuss defenses against these attacks on the FL framework.
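The core idea above, breaking the server-side model into disjoint slices and handing each participant only one slice, can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: the function `slice_model` and the strategy of splitting a layer's output units are hypothetical simplifications for exposition.

```python
import numpy as np

def slice_model(weights, num_slices, rng):
    """Partition a layer's output units into disjoint slices.

    Hypothetical sketch: FedSlice's real slicing strategy is more
    elaborate; here we simply shuffle the output dimension and split
    it into `num_slices` disjoint groups of rows.
    """
    out_dim = weights.shape[0]
    perm = rng.permutation(out_dim)
    return [weights[np.sort(idx)] for idx in np.array_split(perm, num_slices)]

# A server-side layer with 8 output units and 4 inputs.
rng = np.random.default_rng(0)
server_layer = rng.standard_normal((8, 4))

# Each of 4 participants receives one disjoint slice (2 rows each),
# so no single participant ever observes the full server-side model.
slices = slice_model(server_layer, num_slices=4, rng=rng)
```

Because the slices are disjoint, a single malicious participant holds only a fraction of the server-side parameters, which is what limits the attacks the abstract enumerates.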
Wed 17 May (displayed time zone: Hobart)
13:45 - 15:15 | Software security and privacy (Technical Track / Journal-First Papers) at Meeting Room 103. Chair(s): Wei Yang (University of Texas at Dallas)

13:45 (15m) Talk | BFTDetector: Automatic Detection of Business Flow Tampering for Digital Content Service. Technical Track. I Luk Kim (Purdue University), Weihang Wang (University of Southern California), Yonghwi Kwon (University of Virginia), Xiangyu Zhang (Purdue University)

14:00 (15m) Talk | FedSlice: Protecting Federated Learning Models from Malicious Participants with Model Slicing. Technical Track. Ziqi Zhang (Peking University), Yuanchun Li (Institute for AI Industry Research (AIR), Tsinghua University), Bingyan Liu (Peking University), Yifeng Cai (Peking University), Ding Li (Peking University), Yao Guo (Peking University), Xiangqun Chen (Peking University)

14:15 (15m) Talk | PTPDroid: Detecting Violated User Privacy Disclosures to Third-Parties of Android Apps. Technical Track. Zeya Tan (Nanjing University of Science and Technology), Wei Song (Nanjing University of Science and Technology). Pre-print

14:30 (15m) Talk | AdHere: Automated Detection and Repair of Intrusive Ads. Technical Track. Yutian Yan (University of Southern California), Yunhui Zheng, Xinyue Liu (University at Buffalo, SUNY), Nenad Medvidović (University of Southern California), Weihang Wang (University of Southern California)

14:45 (15m) Talk | Bad Snakes: Understanding and Improving Python Package Index Malware Scanning. Technical Track.

15:00 (7m) Talk | DAISY: Dynamic-Analysis-Induced Source Discovery for Sensitive Data. Journal-First Papers. Xueling Zhang (Rochester Institute of Technology), John Heaps (University of Texas at San Antonio), Rocky Slavin (The University of Texas at San Antonio), Jianwei Niu (University of Texas at San Antonio), Travis Breaux (Carnegie Mellon University), Xiaoyin Wang (University of Texas at San Antonio)

15:07 (7m) Talk | Assessing the opportunity of combining state-of-the-art Android malware detectors. Journal-First Papers. Nadia Daoudi (SnT, University of Luxembourg), Kevin Allix (CentraleSupelec Rennes), Tegawendé F. Bissyandé (SnT, University of Luxembourg), Jacques Klein (University of Luxembourg)