Towards Understanding Quality Challenges of the Federated Learning for Neural Networks: A First Look from the Lens of Robustness
Federated learning (FL) is a distributed learning paradigm that preserves users’ data privacy while leveraging the entire dataset of all participants. In FL, multiple models are trained independently on the clients and aggregated centrally to update a global model in an iterative process. Although this approach is excellent at preserving privacy, FL still suffers from quality issues such as attacks or Byzantine faults. Recent attempts have been made to address such quality challenges through robust aggregation techniques for FL. However, the effectiveness of state-of-the-art (SOTA) robust FL techniques is still unclear and lacks a comprehensive study. Therefore, to better understand the current quality status and challenges of these SOTA FL techniques in the presence of attacks and faults, we perform a large-scale empirical study that investigates SOTA FL’s quality from multiple angles: attacks, simulated faults (via mutation operators), and aggregation (defense) methods. In particular, we study FL’s performance on image classification tasks and use Deep Neural Networks as our model type. Furthermore, we perform our study on two generic image datasets and one real-world federated medical image dataset. We also systematically investigate the effect of the proportion of affected clients and of the dataset distribution on the robustness of FL. After a large-scale analysis with 496 configurations, we find that most mutators on each client have a negligible effect on the final model in the generic datasets, and only one of them is effective in the medical dataset. Furthermore, we show that model poisoning attacks are more effective than data poisoning attacks. Moreover, the most robust FL aggregator depends on the attack and the dataset. Finally, we illustrate that a simple ensemble of aggregators achieves a more robust solution than any single aggregator and is the best choice in 75% of the cases.
Our replication package is available online: https://github.com/aminesi/federated.
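To make the aggregation step concrete, here is a minimal sketch (not the paper's code; function names are illustrative) of coordinate-wise aggregation of client model updates: plain averaging in the FedAvg style versus a coordinate-wise median, one of the classic robust aggregators, which tolerates a minority of poisoned clients.

```python
# Hedged sketch, not the authors' implementation: each client update is a
# flat list of parameters; aggregation combines them coordinate by coordinate.

def aggregate_mean(updates):
    """FedAvg-style aggregation: average each parameter across clients."""
    n = len(updates)
    return [sum(vals) / n for vals in zip(*updates)]

def aggregate_median(updates):
    """Coordinate-wise median: robust to a minority of outlier clients."""
    result = []
    for vals in zip(*updates):
        s = sorted(vals)
        m = len(s) // 2
        result.append(s[m] if len(s) % 2 else (s[m - 1] + s[m]) / 2)
    return result

# Three honest clients plus one "poisoned" client sending extreme values.
updates = [[0.9, 1.1], [1.0, 1.0], [1.1, 0.9], [100.0, -100.0]]
print(aggregate_mean(updates))    # the mean is dragged far off by the attacker
print(aggregate_median(updates))  # the median stays near the honest values
```

With a single malicious client, the mean shifts by roughly a quarter of the attacker's magnitude, while the median remains close to the honest consensus, which illustrates why the choice of aggregator matters under model poisoning.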
Wed 17 May | Displayed time zone: Hobart
13:45 - 15:15 | AI systems engineering | SEIP - Software Engineering in Practice / Technical Track / NIER - New Ideas and Emerging Results / Journal-First Papers | Meeting Room 104 | Chair(s): Xin Peng (Fudan University)
13:45 (15m) Talk | FedDebug: Systematic Debugging for Federated Learning Applications | Technical Track
14:00 (15m) Talk | Practical and Efficient Model Extraction of Sentiment Analysis APIs | Technical Track | Weibin Wu (Sun Yat-sen University), Jianping Zhang (The Chinese University of Hong Kong), Victor Junqiu Wei (The Hong Kong Polytechnic University), Xixian Chen (Tencent), Zibin Zheng (School of Software Engineering, Sun Yat-sen University), Irwin King (The Chinese University of Hong Kong), Michael Lyu (The Chinese University of Hong Kong)
14:15 (15m) Talk | CrossCodeBench: Benchmarking Cross-Task Generalization of Source Code Models | Technical Track | Changan Niu (Software Institute, Nanjing University), Chuanyi Li (Nanjing University), Vincent Ng (Human Language Technology Research Institute, University of Texas at Dallas), Bin Luo (Nanjing University) | Pre-print
14:30 (15m) Talk | Challenges in Adopting Artificial Intelligence Based User Input Verification Framework in Reporting Software Systems | SEIP - Software Engineering in Practice | Dong Jae Kim (Concordia University), Tse-Hsun (Peter) Chen (Concordia University), Steve Sporea, Andrei Toma (ERA Environmental Management Solutions), Laura Weinkam, Sarah Sajedi (ERA Environmental Management Solutions)
14:45 (7m) Talk | Towards Understanding Quality Challenges of the Federated Learning for Neural Networks: A First Look from the Lens of Robustness | Journal-First Papers | Amin Eslami Abyane (University of Calgary), Derui Zhu (Technical University of Munich), Roberto Souza (University of Calgary), Lei Ma (University of Alberta), Hadi Hemmati (York University)
14:52 (7m) Talk | An Empirical Study of the Impact of Hyperparameter Tuning and Model Optimization on the Performance Properties of Deep Neural Networks | Journal-First Papers | Lizhi Liao (Concordia University), Heng Li (Polytechnique Montréal), Weiyi Shang (University of Waterloo), Lei Ma (University of Alberta)
15:00 (7m) Talk | Black-box Safety Analysis and Retraining of DNNs based on Feature Extraction and Clustering | Journal-First Papers | Mohammed Attaoui (University of Luxembourg), Hazem FAHMY (University of Luxembourg), Fabrizio Pastore (University of Luxembourg), Lionel Briand (University of Luxembourg; University of Ottawa) | Link to publication, Pre-print
15:07 (7m) Talk | Iterative Assessment and Improvement of DNN Operational Accuracy | NIER - New Ideas and Emerging Results | Antonio Guerriero (Università di Napoli Federico II), Roberto Pietrantuono (Università di Napoli Federico II), Stefano Russo (Università di Napoli Federico II) | Pre-print