ICSE 2023
Sun 14 - Sat 20 May 2023 Melbourne, Australia
Thu 18 May 2023 15:00 - 15:07 at Meeting Room 104 - AI bias and fairness Chair(s): Amel Bennaceur

Context: Machine learning (ML) software systems are permeating many aspects of our lives, such as healthcare, transportation, banking, and recruitment. These systems are trained with data that is often biased, resulting in biased behaviour. To address this issue, fairness testing approaches have been proposed to test ML systems for fairness, but they predominantly focus on assessing classification-based ML systems. These methods are not directly applicable to regression-based systems: for example, they do not quantify the magnitude of the disparity in predicted outcomes, which we identify as important in the context of regression-based ML systems.

Method: We conduct this study as design science research, identifying the problem instance in the context of emergency department (ED) wait-time prediction. In this paper, we develop an effective and efficient approach to evaluate the fairness of regression-based ML systems. We propose fairness degree, a new fairness measure for regression-based ML systems, and a novel search-based fairness testing (SBFT) approach for testing such systems, and we apply the proposed solutions to ED wait-time prediction software.
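The abstract does not spell out how fairness degree is computed or how the search approximates it. As a hedged illustration only, the sketch below assumes fairness degree is the spread (maximum minus minimum) of a model's predicted wait times when only an input's protected attributes are varied, and approximates its worst case with a plain random search; the `toy_predict` model, attribute names, and the random-search stand-in are illustrative assumptions, not the paper's actual SBFT algorithm.

```python
import itertools
import random


def fairness_degree(predict, instance, protected_values):
    """Assumed measure: spread of predictions across all combinations
    of protected-attribute values, holding other features fixed."""
    preds = []
    for combo in itertools.product(*protected_values.values()):
        variant = dict(instance)
        variant.update(zip(protected_values.keys(), combo))
        preds.append(predict(variant))
    return max(preds) - min(preds)


def search_max_fairness_degree(predict, sample_instance, protected_values,
                               budget=100):
    """Crude stand-in for SBFT: random search over non-protected
    features for the instance with the largest fairness degree."""
    return max(fairness_degree(predict, sample_instance(), protected_values)
               for _ in range(budget))


# Toy, deliberately biased ED wait-time predictor (minutes).
def toy_predict(x):
    base = 30.0 + 2.0 * x["acuity"]
    return base + (15.0 if x["gender"] == "female" else 0.0)


protected = {"gender": ["male", "female"]}
inst = {"acuity": 3, "gender": "male"}
print(fairness_degree(toy_predict, inst, protected))  # 15.0
```

A genuine SBFT implementation would replace the random sampler with a guided (e.g. evolutionary) search, which is what makes the approach efficient on realistic models.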

Results: We experimentally evaluate the effectiveness and efficiency of the proposed approach with ML systems trained on real observational data from the healthcare domain. We demonstrate that SBFT significantly outperforms existing fairness testing approaches, achieving up to a 111% increase in effectiveness and a 190% increase in efficiency over the best-performing existing approaches.

Conclusion: These findings indicate that our novel fairness measure and our new approach for fairness testing of regression-based ML systems can identify the degree of fairness in predictions, helping software teams make data-informed decisions about whether such systems are ready to deploy. The scientific knowledge gained from our work can be phrased as a technological rule: to measure the fairness of regression-based ML systems in the context of emergency department wait-time prediction, use fairness degree and search-based techniques to approximate it.

Thu 18 May

Displayed time zone: Hobart

13:45 - 15:15
AI bias and fairness (DEMO - Demonstrations / Technical Track / Journal-First Papers) at Meeting Room 104
Chair(s): Amel Bennaceur The Open University, UK
13:45
15m
Talk
Towards Understanding Fairness and its Composition in Ensemble Machine Learning
Technical Track
Usman Gohar Dept. of Computer Science, Iowa State University, Sumon Biswas Carnegie Mellon University, Hridesh Rajan Iowa State University
Pre-print
14:00
15m
Talk
Fairify: Fairness Verification of Neural Networks
Technical Track
Sumon Biswas Carnegie Mellon University, Hridesh Rajan Iowa State University
Pre-print
14:15
15m
Talk
Leveraging Feature Bias for Scalable Misprediction Explanation of Machine Learning Models
Technical Track
Jiri Gesi University of California, Irvine, Xinyun Shen University of California, Irvine, Yunfan Geng University of California, Irvine, Qihong Chen University of California, Irvine, Iftekhar Ahmed University of California, Irvine
14:30
15m
Talk
Information-Theoretic Testing and Debugging of Fairness Defects in Deep Neural Networks
Technical Track
Verya Monjezi University of Texas at El Paso, Ashutosh Trivedi University of Colorado Boulder, Gang Tan Pennsylvania State University, Saeid Tizpaz-Niari University of Texas at El Paso
Pre-print
14:45
7m
Talk
Seldonian Toolkit: Building Software with Safe and Fair Machine Learning
DEMO - Demonstrations
Austin Hoag Berkeley Existential Risk Initiative, James E. Kostas University of Massachusetts, Bruno Castro da Silva University of Massachusetts, Philip S. Thomas University of Massachusetts, Yuriy Brun University of Massachusetts
Pre-print Media Attached
14:52
7m
Talk
What Would You do? An Ethical AI Quiz
DEMO - Demonstrations
Wei Teo Monash University, Ze Teoh Monash University, Dayang Abang Arabi Monash University, Morad Aboushadi Monash University, Khairenn Lai Monash University, Zhe Ng Monash University, Aastha Pant Monash University, Rashina Hoda Monash University, Kla Tantithamthavorn Monash University, Burak Turhan University of Oulu
Pre-print Media Attached
15:00
7m
Talk
Search-Based Fairness Testing for Regression-Based Machine Learning Systems
Journal-First Papers
Anjana Perera Oracle Labs, Australia, Aldeida Aleti Monash University, Kla Tantithamthavorn Monash University, Jirayus Jiarpakdee Monash University, Australia, Burak Turhan University of Oulu, Lisa Kuhn Monash University, Katie Walker Monash University
Link to publication DOI
15:07
7m
Talk
FairMask: Better Fairness via Model-based Rebalancing of Protected Attributes
Journal-First Papers
Kewen Peng North Carolina State University, Tim Menzies North Carolina State University, Joymallya Chakraborty North Carolina State University
Link to publication Pre-print