ESEIW 2024
Sun 20 - Fri 25 October 2024 Barcelona, Spain

This program is tentative and subject to change.

Thu 24 Oct 2024 12:00 - 12:15 at Room 3 - Software testing

Context: The increasing integration of artificial intelligence and machine learning into software systems has highlighted the critical importance of ensuring fairness in these technologies. Bias in software can lead to inequitable outcomes, making fairness testing essential. However, the current landscape of fairness testing tools remains underexplored, particularly regarding their practical applicability and usability for software development practitioners.

Goal: This study evaluated the practical applicability of existing fairness testing tools for software development practitioners, assessing their usability, documentation, and overall effectiveness in real-world industry settings.

Method: We identified 41 fairness testing tools from the literature and conducted a heuristic evaluation and documentary analysis of their installation processes, user interfaces, supporting documentation, and update frequencies. The technical analysis also assessed configurability for diverse datasets. The analysis focused on identifying strengths and deficiencies to determine the tools' suitability for industry use.

Findings: Most fairness testing tools show significant deficiencies, particularly in user-friendliness, detailed documentation, and configurability, which restrict their practical use in industry settings. The tools also lack regular updates and focus narrowly on specific datasets, which constrains their versatility and scalability. Despite some strengths, such as cost-effectiveness and compatibility with several environments, the overall landscape of fairness testing tools requires substantial improvement to meet industry needs.

Conclusion: There is a pressing need for fairness testing tools that align more closely with industry requirements, offering enhanced usability, comprehensive documentation, and greater configurability to effectively support software development practitioners.
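For readers unfamiliar with what a fairness testing tool checks in practice, the sketch below shows a minimal fairness check using Fairlearn, one of the open-source toolkits in this space. It is purely illustrative and not drawn from the study: the data, predictions, and tolerance threshold are hypothetical.

```python
# Illustrative sketch (not from the paper): a minimal fairness check with
# Fairlearn. The dataset, predictions, and tolerance below are hypothetical.
import numpy as np
from fairlearn.metrics import demographic_parity_difference

# Hypothetical ground truth, model predictions, and a sensitive attribute.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])
sensitive = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

# Demographic parity difference: the gap in positive-prediction rates
# between the groups defined by the sensitive attribute.
dpd = demographic_parity_difference(y_true, y_pred, sensitive_features=sensitive)

# Hypothetical tolerance; a fairness test could fail the build above this gap.
assert dpd <= 0.2, f"Demographic parity difference too large: {dpd:.2f}"
print(f"Demographic parity difference: {dpd:.2f}")
```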


Thu 24 Oct

Displayed time zone: Brussels, Copenhagen, Madrid, Paris

11:00 - 12:30: Software testing (Room 3)
11:00
20m
Full-paper
Contexts Matter: An Empirical Study on Contextual Influence in Fairness Testing for Deep Learning Systems
ESEM Technical Papers
Chengwen Du (University of Birmingham), Tao Chen (University of Birmingham)
11:20
20m
Full-paper
Automatic Data Labeling for Software Vulnerability Prediction Models: How Far Are We?
ESEM Technical Papers
Triet Le (The University of Adelaide), Muhammad Ali Babar (School of Computer Science, The University of Adelaide)
11:40
20m
Full-paper
Mitigating Data Imbalance for Software Vulnerability Assessment: Does Data Augmentation Help?
ESEM Technical Papers
Triet Le (The University of Adelaide), Muhammad Ali Babar (School of Computer Science, The University of Adelaide)
12:00
15m
Industry talk
From Literature to Practice: Exploring Fairness Testing Tools for the Software Industry Adoption
ESEM IGC
Thanh Nguyen (University of Calgary), Maria Teresa Baldassarre (Department of Computer Science, University of Bari), Luiz Fernando de Lima, Ronnie de Souza Santos (University of Calgary)
Pre-print
12:15
15m
Vision and Emerging Results
Do Developers Use Static Application Security Testing (SAST) Tools Straight Out of the Box? A large-scale Empirical Study
ESEM Emerging Results, Vision and Reflection Papers Track
Gareth Bennett (Lancaster University), Tracy Hall (Lancaster University), Steve Counsell (Brunel University London), Emily Winter (Lancaster University), Thomas Shippey (LogicMonitor)