Machine learning is a vital part of much modern-day decision-making software. At the same time, it has been shown to exhibit bias, which can cause unjust treatment of individuals and population groups. One way to achieve fairness in machine learning software is to provide individuals with the same degree of benefit regardless of sensitive attributes (e.g., students receive the same grade independent of their sex or race). However, there can be other attributes on which one does want to discriminate (e.g., students who did their homework should receive higher grades). We call such attributes anti-protected attributes. When reducing the bias of machine learning software, one risks losing the desired discriminatory behaviour on anti-protected attributes. To combat this, we use grid search to show that machine learning software can be debiased (e.g., gender bias can be reduced) while also improving its ability to discriminate on anti-protected attributes.
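To make the trade-off concrete, here is a minimal sketch of grid-search debiasing using Fairlearn's GridSearch reduction with a demographic-parity constraint on the protected attribute only. The synthetic dataset, the column names `sex` (protected) and `homework` (anti-protected), and the `rate_gap` helper are hypothetical illustrations under assumed conditions, not the paper's actual setup.

```python
# A minimal sketch, assuming a setup akin to Fairlearn's GridSearch reduction.
# The data and attribute names below are invented for illustration.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from fairlearn.reductions import GridSearch, DemographicParity

rng = np.random.default_rng(0)
n = 2000
X = pd.DataFrame({
    "sex": rng.integers(0, 2, n),       # protected attribute (debias on this)
    "homework": rng.integers(0, 2, n),  # anti-protected attribute (keep this)
    "score": rng.normal(0.0, 1.0, n),
})
# Ground truth depends on homework (desired) and, undesirably, on sex (bias).
y = ((X["homework"] + 0.5 * X["score"] + 0.8 * X["sex"]) > 1).astype(int)

# Grid search sweeps Lagrange multipliers: each candidate model trades
# accuracy against a demographic-parity constraint on "sex" only, so the
# "homework" signal is free to remain (or grow) in the selected model.
sweep = GridSearch(
    estimator=LogisticRegression(),
    constraints=DemographicParity(),
    grid_size=20,
)
sweep.fit(X, y, sensitive_features=X["sex"])
pred = sweep.predict(X)

def rate_gap(pred, attr):
    """Absolute gap in positive-prediction rates between the two groups of attr."""
    pred, attr = np.asarray(pred), np.asarray(attr)
    return abs(pred[attr == 1].mean() - pred[attr == 0].mean())

print("gap on sex (want small):     ", rate_gap(pred, X["sex"]))
print("gap on homework (want large):", rate_gap(pred, X["homework"]))
```

Constraining only the protected attribute is the design point: demographic parity is imposed on `sex`, while the gap on `homework` serves as a rough proxy for the retained discriminatory behaviour the abstract argues should be preserved or improved.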