FairNeuron: Improving Deep Neural Network Fairness with Adversary Games on Selective Neurons
Fri 13 May 2022 04:00 - 04:05 at ICSE room 4-even hours - Software Fairness Chair(s): Aldeida Aleti
With Deep Neural Networks (DNNs) being integrated into a growing number of critical systems with far-reaching impacts on society, there are increasing concerns about their ethical performance, such as fairness. Unfortunately, model fairness and accuracy are in many cases contradictory goals to optimize. To address this issue, a number of works have tried to improve model fairness by using an adversarial game at the model level. This approach introduces an adversary that evaluates the fairness of a model alongside its prediction accuracy on the main task, and performs joint optimization to achieve a balanced result. In this paper, we observe that during backpropagation-based training, the same contradiction appears at the level of individual neurons. Based on this observation, we propose FairNeuron, an automatic DNN model repair tool, to mitigate fairness concerns and balance the accuracy-fairness trade-off without introducing another model. It works by detecting neurons whose optimization directions under the accuracy and fairness training goals contradict each other, and achieving a trade-off through selective dropout. Compared with state-of-the-art methods, our approach is lightweight, making it scalable and more efficient. Our evaluation on three datasets shows that FairNeuron can effectively improve all models' fairness while maintaining stable utility.
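The core idea in the abstract — flag neurons whose gradients from the accuracy loss and the fairness loss point in opposite directions, then apply dropout only to those neurons — can be sketched as follows. This is a minimal NumPy illustration of that idea, not the authors' implementation; the function names, the per-neuron dot-product conflict test, and the dropout probability are illustrative assumptions.

```python
import numpy as np

def conflicting_neuron_mask(grad_accuracy, grad_fairness):
    """Flag neurons whose accuracy-loss and fairness-loss gradients
    conflict (negative per-neuron dot product). Each row holds one
    neuron's gradient w.r.t. its incoming weights. (Illustrative
    criterion, not the paper's exact detection rule.)"""
    alignment = np.sum(grad_accuracy * grad_fairness, axis=1)
    return alignment < 0.0  # True = contradictory neuron

def selective_dropout(activations, conflict_mask, drop_prob=0.5, rng=None):
    """Drop only the flagged (contradictory) neurons with probability
    drop_prob; all other neurons pass through unchanged."""
    rng = rng or np.random.default_rng(0)
    keep = np.ones_like(activations)
    dropped = conflict_mask & (rng.random(conflict_mask.shape) < drop_prob)
    keep[dropped] = 0.0
    return activations * keep

# Toy example: 4 neurons, 2 incoming weights each.
g_acc = np.array([[1.0, 0.5], [1.0, 1.0], [-1.0, 0.2], [0.3, 0.3]])
g_fair = np.array([[-1.0, -0.5], [1.0, 1.0], [1.0, 0.1], [0.2, 0.1]])
mask = conflicting_neuron_mask(g_acc, g_fair)
# mask: [True, False, True, False] -- neurons 0 and 2 conflict
```

In a real training loop the mask would be recomputed per batch from the two loss gradients, and the dropout would be applied during the joint optimization step.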
Fri 13 May (displayed time zone: Eastern Time, US & Canada)
04:00 - 05:00 | Software Fairness (Technical Track) at ICSE room 4-even hours | Chair(s): Aldeida Aleti (Monash University)
04:00 5m Talk | FairNeuron: Improving Deep Neural Network Fairness with Adversary Games on Selective Neurons (Technical Track)
Xuanqi Gao (Xi'an Jiaotong University), Juan Zhai (Rutgers University), Shiqing Ma (Rutgers University), Chao Shen (Xi'an Jiaotong University), Yufei Chen (Xi'an Jiaotong University), Qian Wang (Wuhan University)
DOI | Pre-print | Media Attached
04:05 5m Talk | Training Data Debugging for the Fairness of Machine Learning Software (Technical Track)
Yanhui Li (Nanjing University), Linghan Meng (Nanjing University), Lin Chen (Nanjing University), Li Yu (Nanjing University), Di Wu (Momenta), Yuming Zhou (Nanjing University), Baowen Xu (Nanjing University)
Pre-print | Media Attached
04:10 5m Talk | NeuronFair: Interpretable White-Box Fairness Testing through Biased Neuron Identification (Technical Track)
Haibin Zheng (Zhejiang University of Technology), Zhiqing Chen (Zhejiang University of Technology), Tianyu Du (Zhejiang University), Xuhong Zhang (Zhejiang University), Yao Cheng (Huawei International), Shouling Ji (Zhejiang University), Jingyi Wang (Zhejiang University), Yue Yu (National University of Defense Technology), Jinyin Chen (Zhejiang University of Technology)
DOI | Pre-print | Media Attached
04:15 5m Talk | Explanation-Guided Fairness Testing through Genetic Algorithm (Technical Track)
Ming Fan (Xi'an Jiaotong University), Wenying Wei (Xi'an Jiaotong University), Wuxia Jin (Xi'an Jiaotong University), Zijiang Yang (Western Michigan University), Ting Liu (Xi'an Jiaotong University)
DOI | Pre-print