Tue 16 Jul 2024 16:45 - 17:00 at Acerola - Afternoon session 2

This paper investigates the relationships between machine learning hyperparameters and fairness. Data-driven solutions are increasingly used in critical socio-technical applications where ensuring fairness is important.

Rather than explicitly encoding decision logic via control and data structures, ML developers provide input data, perform pre-processing, choose ML algorithms, and tune hyperparameters (HPs) to infer a program that encodes the decision logic.

Prior works report that the selection of HPs can significantly influence fairness.

However, tuning HPs to find an ideal trade-off between accuracy, precision, and fairness has remained an expensive and tedious task. Can we predict the fairness of an HP configuration for a given dataset? Are the predictions robust to distribution shifts?

We focus on group fairness notions and investigate the HP space of five training algorithms. We first find that tree regressors and XGBoost significantly outperform deep neural networks and support vector machines in accurately predicting the fairness of HP configurations. When predicting the fairness of ML hyperparameters under temporal distribution shift, the tree regressors outperform the other algorithms with reasonable accuracy. However, the precision depends on the ML training algorithm, the dataset, and the protected attributes. For example, the tree regressor model was robust to a training-data shift from 2014 to 2018 on logistic regression and discriminant analysis HPs with sex as the protected attribute, but not for race or for other training algorithms. Our method provides a sound framework to efficiently fine-tune ML training algorithms and to understand the relationships between HPs and fairness.
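
The abstract frames fairness prediction as learning a regressor from hyperparameter configurations to a measured group-fairness score. As an illustration only, the sketch below shows one way that framing could look with a scikit-learn tree regressor; the hyperparameter ranges, the synthetic fairness targets, and all variable names are assumptions rather than the paper's artifact. In the paper's setting, the target would come from actually training each configuration and measuring a group-fairness metric with respect to a protected attribute.

```python
# Minimal sketch (not the authors' implementation): predict a group-fairness
# score from sampled hyperparameter configurations via supervised regression.
# Each row is a hypothetical HP configuration of a training algorithm; the
# target stands in for a fairness score measured by training and evaluating
# that configuration on a dataset. All values below are synthetic.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)

# Hypothetical logistic-regression HP samples: (C, max_iter, l1_ratio)
n = 500
X = np.column_stack([
    rng.uniform(0.01, 10.0, n),   # regularization strength C
    rng.integers(50, 500, n),     # max_iter
    rng.uniform(0.0, 1.0, n),     # l1_ratio
])
# Synthetic stand-in for the measured fairness score of each configuration
y = 0.1 * np.log(X[:, 0]) + 0.05 * X[:, 2] + rng.normal(0, 0.02, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
reg = DecisionTreeRegressor(max_depth=6, random_state=0).fit(X_tr, y_tr)
print("MAE on held-out configurations:",
      mean_absolute_error(y_te, reg.predict(X_te)))
```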

Tue 16 Jul

Displayed time zone: Brasilia, Distrito Federal, Brazil

16:00 - 18:00
Afternoon session 2 (PROMISE 2024) at Acerola
16:00
15m
Talk
MoreFixes: A Large-Scale Dataset of CVE Fix Commits Mined through Enhanced Repository Discovery
PROMISE 2024
Jafar Akhoundali Leiden University, Sajad Rahim Nouri Islamic Azad University of Ramsar, Kristian Rietveld Leiden University, Olga Gadyatskaya
16:15
15m
Talk
A Pilot Study in Surveying Data Challenges of Automatic Software Engineering Tasks
PROMISE 2024
Liming Dong CSIRO's Data61, Qinghua Lu CSIRO's Data61, Liming Zhu CSIRO's Data61
16:30
15m
Talk
Prioritising GitHub Priority Labels
PROMISE 2024
James Caddy University of Adelaide, Christoph Treude Singapore Management University
16:45
15m
Talk
Predicting Fairness of ML Software Configurations
PROMISE 2024
Salvador Robles Herrera University of Texas at El Paso, Verya Monjezi University of Texas at El Paso, Vladik Kreinovich University of Texas at El Paso, Ashutosh Trivedi University of Colorado Boulder, Saeid Tizpaz-Niari University of Texas at El Paso
17:00
5m
Day closing
Closing
PROMISE 2024