MODELS 2024
Sun 22 - Fri 27 September 2024, Linz, Austria

Decision-making systems are prone to discriminating against individuals on the basis of protected characteristics such as gender and ethnicity. Detecting and explaining the discriminatory behavior of implemented software is difficult. To prevent discrimination from the outset of software development, we propose a model-based methodology called MBFair that enables the verification of UML-based software designs with respect to individual fairness. MBFair performs this verification by generating temporal logic clauses; the results of checking these clauses are used to report on the individual fairness of the target software. We study the applicability of MBFair in three real-world case studies: a bank services system, a delivery system, and a loan system. We empirically evaluate the necessity of MBFair in a user study, comparing it against a baseline scenario in which no modeling or tool support is offered. Our evaluation indicates that manually analyzing the UML models produces unreliable results, with a 46% chance that analysts overlook true-positive cases of discrimination. We conclude that analysts require support for fairness-related analysis, such as that provided by our MBFair methodology.
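As a minimal illustrative sketch (not the exact clauses MBFair generates, which are not shown here), individual fairness is commonly expressed as a property over pairs of system runs that agree on all inputs except a protected attribute $p$: such runs should yield the same decision. In a temporal-logic style this could be written as

\[
\mathbf{G}\,\bigl(\, \mathit{eq}_{\neg p}(\pi_1, \pi_2) \;\rightarrow\; \mathit{decision}(\pi_1) = \mathit{decision}(\pi_2) \,\bigr),
\]

where $\pi_1$ and $\pi_2$ denote two executions whose inputs differ at most in the protected attribute $p$, and $\mathit{eq}_{\neg p}$ states that all non-protected inputs are equal. Checking such clauses against the design model is what allows discrimination to be reported before the software is implemented.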