Fri 8 Dec 2023 16:00 - 16:30 at Foothill G - Fairness and Privacy

Fairness testing of machine learning models has attracted growing attention amid concerns about bias and discrimination, as these models continue to permeate end-user applications. However, accurately and reliably measuring the fairness of a machine learning model remains a substantial challenge. Unlike biased or random sampling, representative sampling plays a pivotal role in ensuring accurate fairness assessments and providing insight into the underlying dynamics of the data. In this study, we introduce RSFair, an approach that adopts representative sampling to comprehensively evaluate the fairness of a trained machine learning model. Our findings on two datasets indicate that RSFair yields more accurate and reliable results, improving the efficiency of subsequent search steps and, ultimately, the fairness of the model. Using the Orthogonal Matching Pursuit (OMP) and K-Singular Value Decomposition (K-SVD) algorithms for representative sampling, RSFair improves the detection of discriminatory inputs by 76% and the fairness performance by 53% compared to other search-based approaches in the literature.
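The abstract pairs K-SVD dictionary learning with OMP sparse coding to pick representative inputs. A minimal sketch of that idea is shown below; since the paper's implementation is not given here, this uses scikit-learn's DictionaryLearning (with OMP as the sparse coder) as a stand-in for K-SVD, and the atom-wise selection rule is an illustrative assumption, not the authors' method.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

# Toy stand-in for a dataset of model inputs (100 samples, 8 features).
rng = np.random.RandomState(0)
X = rng.rand(100, 8)

# Learn a small dictionary and sparse-code each sample with OMP.
# (DictionaryLearning approximates the K-SVD role from the abstract.)
dl = DictionaryLearning(
    n_components=5,
    transform_algorithm="omp",
    transform_n_nonzero_coefs=2,
    max_iter=20,
    random_state=0,
)
codes = dl.fit_transform(X)  # shape: (n_samples, n_components)

# Illustrative selection rule: for each dictionary atom, keep the sample
# that uses it most strongly, yielding a small representative subset.
rep_idx = np.unique(np.abs(codes).argmax(axis=0))
representative_samples = X[rep_idx]
```

The representative subset (at most one sample per atom) would then seed the subsequent fairness search instead of a random sample.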

Fri 8 Dec

Displayed time zone: Pacific Time (US & Canada)

16:00 - 17:00
Fairness and Privacy (PROMISE 2023) at Foothill G

16:00 (30m) Paper
Automated Fairness Testing with Representative Sampling
PROMISE 2023
Umutcan Karakaş (Istanbul Technical University), Ayse Tosun (Istanbul Technical University)
DOI

16:30 (30m) Paper
Model Review: A PROMISEing Opportunity
PROMISE 2023
Tim Menzies (North Carolina State University)
DOI · Pre-print