Information-Theoretic Testing and Debugging of Fairness Defects in Deep Neural Networks
Deep feedforward neural networks (DNNs) are increasingly deployed in socioeconomically critical decision-support software systems. DNNs are exceptionally good at finding minimal, sufficient statistical patterns within their training data. Consequently, DNNs may learn to encode decisions that amplify existing biases or introduce new ones, potentially disadvantaging protected individuals and groups and violating legal protections. While existing search-based software testing approaches have been effective in discovering fairness defects, they do not supplement these defects with debugging aids, such as severity scores and causal explanations, that are crucial for helping developers triage defects and decide on the next course of action. Can we measure the severity of fairness defects in DNNs? Are these defects symptomatic of improper training, or do they merely reflect biases present in the training data? To answer such questions, we present DICE: an information-theoretic testing and debugging framework to discover and localize fairness defects in DNNs.
The key goal of DICE is to assist software developers in triaging fairness defects by ordering them by severity. Toward this goal, we quantify fairness in terms of the protected information (in bits) used in decision making. A quantitative view of fairness defects not only helps in ordering them; our empirical evaluation shows that it also improves search efficiency due to the resulting smoothness of the search space. Guided by this quantitative notion of fairness, we present a causal debugging framework to localize inadequately trained layers and neurons responsible for fairness defects. Our experiments over ten DNNs, developed for socially critical tasks, show that DICE efficiently characterizes the amount of discrimination, effectively generates discriminatory instances (vis-à-vis state-of-the-art techniques), and localizes layers/neurons with significant biases.
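The "protected information in bits" that the abstract refers to can be illustrated with the empirical mutual information between a protected attribute and a model's decisions: 0 bits means the decisions carry no information about the attribute, while higher values indicate stronger statistical dependence. The sketch below is not DICE's actual implementation; it is a minimal, assumption-laden illustration of how such a quantity can be estimated from observed (attribute, decision) pairs.

```python
from collections import Counter
from math import log2

def mutual_information_bits(protected, decisions):
    """Empirical mutual information I(A; Y) in bits between a
    protected attribute A and model decisions Y, estimated from
    paired observations. A toy illustration, not DICE itself."""
    n = len(protected)
    p_a = Counter(protected)            # marginal counts of A
    p_y = Counter(decisions)            # marginal counts of Y
    p_ay = Counter(zip(protected, decisions))  # joint counts of (A, Y)
    mi = 0.0
    for (a, y), count in p_ay.items():
        joint = count / n
        mi += joint * log2(joint / ((p_a[a] / n) * (p_y[y] / n)))
    return mi

# Decisions that perfectly track the protected attribute leak 1 bit:
print(mutual_information_bits([0, 0, 1, 1], [0, 0, 1, 1]))  # -> 1.0
# Decisions independent of the attribute leak 0 bits:
print(mutual_information_bits([0, 1, 0, 1], [0, 0, 1, 1]))  # -> 0.0
```

Under this view, a larger value corresponds to a more severe fairness defect, which is what makes the measure usable for ordering defects during triage.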
Thu 18 May (displayed time zone: Hobart)
13:45 - 15:15 | AI bias and fairness (DEMO - Demonstrations / Technical Track / Journal-First Papers) at Meeting Room 104
Chair(s): Amel Bennaceur (The Open University, UK)

13:45 (15m Talk) Towards Understanding Fairness and its Composition in Ensemble Machine Learning (Technical Track)
Usman Gohar (Dept. of Computer Science, Iowa State University), Sumon Biswas (Carnegie Mellon University), Hridesh Rajan (Iowa State University)

14:00 (15m Talk) Fairify: Fairness Verification of Neural Networks (Technical Track)

14:15 (15m Talk) Leveraging Feature Bias for Scalable Misprediction Explanation of Machine Learning Models (Technical Track)
Jiri Gesi, Xinyun Shen, Yunfan Geng, Qihong Chen, Iftekhar Ahmed (University of California, Irvine)

14:30 (15m Talk) Information-Theoretic Testing and Debugging of Fairness Defects in Deep Neural Networks (Technical Track)
Verya Monjezi (University of Texas at El Paso), Ashutosh Trivedi (University of Colorado Boulder), Gang (Gary) Tan (Pennsylvania State University), Saeid Tizpaz-Niari (University of Texas at El Paso)

14:45 (7m Talk) Seldonian Toolkit: Building Software with Safe and Fair Machine Learning (DEMO - Demonstrations)
Austin Hoag (Berkeley Existential Risk Initiative), James E. Kostas, Bruno Castro da Silva, Philip S. Thomas, Yuriy Brun (University of Massachusetts)

14:52 (7m Talk) What Would You Do? An Ethical AI Quiz (DEMO - Demonstrations)
Wei Teo, Ze Teoh, Dayang Abang Arabi, Morad Aboushadi, Khairenn Lai, Zhe Ng, Aastha Pant, Rashina Hoda, Kla Tantithamthavorn (Monash University), Burak Turhan (University of Oulu)

15:00 (7m Talk) Search-Based Fairness Testing for Regression-Based Machine Learning Systems (Journal-First Papers)
Anjana Perera (Oracle Labs, Australia), Aldeida Aleti, Kla Tantithamthavorn (Monash University), Jirayus Jiarpakdee (Monash University, Australia), Burak Turhan (University of Oulu), Lisa Kuhn, Katie Walker (Monash University)

15:07 (7m Talk) FairMask: Better Fairness via Model-based Rebalancing of Protected Attributes (Journal-First Papers)
Kewen Peng, Tim Menzies, Joymallya Chakraborty (North Carolina State University)