Leveraging Feature Bias for Scalable Misprediction Explanation of Machine Learning Models
Interpreting and debugging machine learning models is necessary to ensure their robustness, and explaining mispredictions can help significantly in doing so. While recent work on misprediction explanation has shown promise in generating interpretable explanations, state-of-the-art techniques "blindly" deduce misprediction explanation rules from all data features, which may not scale as the number of features grows. To alleviate this problem, we propose an efficient misprediction explanation technique named Bias Guided Misprediction Diagnoser (BGMD), which leverages two pieces of prior knowledge about data: a) data often exhibit highly skewed feature distributions, and b) trained models in many cases perform poorly on subsets of the data containing under-represented features. We further propose a technique named MAPS (Mispredicted Area UPweight Sampling). During model retraining, MAPS increases the weights of the data points that belong to the misprediction-prone group, i.e., those containing under-represented features. Thus, MAPS makes the retrained model pay more attention to the under-represented features. Our empirical study shows that BGMD outperforms the state-of-the-art misprediction diagnoser and reduces diagnosis time by 92%. Furthermore, MAPS outperforms two state-of-the-art techniques at improving a model's performance on mispredicted data without compromising its performance on all data. All research artifacts (i.e., tools, scripts, and data) of this study are available on the accompanying website.
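The upweight-sampling idea behind MAPS can be sketched in a few lines. This is a minimal illustration on synthetic data, assuming a simple stand-in rule for the diagnosis step (membership in the under-represented feature group) and an arbitrary 5x upweight factor; none of these names or choices come from the paper's actual implementation.

```python
# Minimal sketch of MAPS-style upweight retraining on synthetic data.
# The group-detection rule and the 5x upweight factor are illustrative
# assumptions, not the paper's actual implementation.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Skewed binary feature: group 1 is under-represented (~10% of rows).
n = 2000
group = (rng.random(n) < 0.1).astype(float)
x = rng.normal(size=(n, 3))
# Labels depend on x[:, 1] with the opposite sign inside the minority
# group, so a uniformly weighted linear model mostly fits the majority.
logits = x[:, 0] + np.where(group == 1, -2.0, 2.0) * x[:, 1]
y = (logits + rng.normal(scale=0.5, size=n) > 0).astype(int)
features = np.column_stack([x, group])

X_tr, X_te, y_tr, y_te = train_test_split(features, y, random_state=0)

# Baseline: uniform sample weights.
base = LogisticRegression().fit(X_tr, y_tr)

# Stand-in for the BGMD diagnosis step: treat rows with the
# under-represented feature value as the misprediction-prone subset,
# then upweight them when retraining (the MAPS idea).
weights = np.where(X_tr[:, 3] == 1, 5.0, 1.0)
reweighted = LogisticRegression().fit(X_tr, y_tr, sample_weight=weights)

minority = X_te[:, 3] == 1
print("baseline, minority subset:", base.score(X_te[minority], y_te[minority]))
print("reweighted, minority subset:", reweighted.score(X_te[minority], y_te[minority]))
```

The key mechanism is the `sample_weight` argument to `fit`: the diagnosed subset contributes more to the retraining loss, which is what "pay more attention to the under-represented features" amounts to in practice.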
Thu 18 May (displayed time zone: Hobart)
13:45 - 15:15 | AI bias and fairness (DEMO - Demonstrations / Technical Track / Journal-First Papers) at Meeting Room 104. Chair(s): Amel Bennaceur (The Open University, UK)
13:45 (15m) Talk: Towards Understanding Fairness and its Composition in Ensemble Machine Learning. Technical Track. Usman Gohar (Dept. of Computer Science, Iowa State University), Sumon Biswas (Carnegie Mellon University), Hridesh Rajan (Iowa State University). Pre-print
14:00 (15m) Talk: Fairify: Fairness Verification of Neural Networks. Technical Track. Pre-print
14:15 (15m) Talk: Leveraging Feature Bias for Scalable Misprediction Explanation of Machine Learning Models. Technical Track. Jiri Gesi, Xinyun Shen, Yunfan Geng, Qihong Chen, Iftekhar Ahmed (University of California, Irvine)
14:30 (15m) Talk: Information-Theoretic Testing and Debugging of Fairness Defects in Deep Neural Networks. Technical Track. Verya Monjezi (University of Texas at El Paso), Ashutosh Trivedi (University of Colorado Boulder), Gang (Gary) Tan (Pennsylvania State University), Saeid Tizpaz-Niari (University of Texas at El Paso). Pre-print
14:45 (7m) Talk: Seldonian Toolkit: Building Software with Safe and Fair Machine Learning. DEMO - Demonstrations. Austin Hoag (Berkeley Existential Risk Initiative), James E. Kostas, Bruno Castro da Silva, Philip S. Thomas, Yuriy Brun (University of Massachusetts). Pre-print, Media Attached
14:52 (7m) Talk: What Would You do? An Ethical AI Quiz. DEMO - Demonstrations. Wei Teo, Ze Teoh, Dayang Abang Arabi, Morad Aboushadi, Khairenn Lai, Zhe Ng, Aastha Pant, Rashina Hoda, Kla Tantithamthavorn (Monash University), Burak Turhan (University of Oulu). Pre-print, Media Attached
15:00 (7m) Talk: Search-Based Fairness Testing for Regression-Based Machine Learning Systems. Journal-First Papers. Anjana Perera (Oracle Labs, Australia), Aldeida Aleti, Kla Tantithamthavorn (Monash University), Jirayus Jiarpakdee (Monash University, Australia), Burak Turhan (University of Oulu), Lisa Kuhn, Katie Walker (Monash University). Link to publication, DOI
15:07 (7m) Talk: FairMask: Better Fairness via Model-based Rebalancing of Protected Attributes. Journal-First Papers. Kewen Peng, Tim Menzies, Joymallya Chakraborty (North Carolina State University). Link to publication, Pre-print