MODELS 2022
Sun 23 - Fri 28 October 2022 Montréal, Canada
Thu 27 Oct 2022 16:37 - 17:00 at A-3502.1 - Applications II Chair(s): Judith Michael

Models produced by machine learning are not guaranteed to be free from bias, particularly when trained and tested with data produced in discriminatory environments. Such bias can be unethical, especially when the data contains sensitive attributes such as sex, race, or age. Existing approaches help mitigate these biases by providing bias metrics and mitigation algorithms, but users must implement them in general-purpose or statistical programming languages, which can be demanding for users with little experience in programming or in fairness in machine learning. We present FairML, a model-based approach that facilitates bias measurement and mitigation with reduced software development effort. Our evaluation shows that FairML requires fewer lines of code to produce measurement values comparable to those produced by the baseline code.
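As an illustration of the kind of baseline code the abstract refers to (this sketch is not from the paper; the function name, the synthetic labels, and the choice of statistical parity difference as the example metric are all assumptions for illustration), a common bias metric can be computed by hand in plain Python as follows:

```python
# Illustrative sketch (not from FairML): hand-coding one common bias
# metric, statistical parity difference, over predicted labels and a
# binary sensitive attribute.

def statistical_parity_difference(y_pred, sensitive):
    """P(favorable | unprivileged) - P(favorable | privileged).

    y_pred: predicted labels (1 = favorable outcome)
    sensitive: protected-attribute values (1 = privileged group)
    A value of 0 indicates parity; negative values indicate the
    unprivileged group receives the favorable outcome less often.
    """
    priv = [y for y, s in zip(y_pred, sensitive) if s == 1]
    unpriv = [y for y, s in zip(y_pred, sensitive) if s == 0]
    rate = lambda group: sum(group) / len(group)
    return rate(unpriv) - rate(priv)

# Tiny synthetic example: the privileged group gets the favorable
# outcome at rate 3/4, the unprivileged group at rate 1/4.
y_pred    = [1, 1, 1, 0, 1, 0, 0, 0]
sensitive = [1, 1, 1, 1, 0, 0, 0, 0]
print(statistical_parity_difference(y_pred, sensitive))  # -0.5
```

Writing, validating, and repeating such code for multiple metrics and mitigation algorithms is the manual effort that a model-based approach like FairML aims to reduce.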

Thu 27 Oct

Displayed time zone: Eastern Time (US & Canada)

15:30 - 17:00
15:30
22m
Talk
Digital Twin as Risk Free Experimentation Aid for Techno-socio-economic Systems (P&I)
Technical Track
Souvik Barat Tata Consultancy Services Research, Vinay Kulkarni Tata Consultancy Services Research, Tony Clark Aston University, Balbir Barn Middlesex University, UK
15:52
22m
Talk
Digital TwinCity: A Holistic Approach towards Comparative Analysis of Business Processes (Demo)
Tools & Demonstrations
16:15
22m
Talk
Facilitating the migration to the microservice architecture via model-driven reverse engineering and reinforcement learning (J1st)
Journal-first
Shekoufeh Rahimi University of Isfahan, MohammadHadi Dehghani Johannes Kepler University Linz, Massimo Tisi IMT Atlantique, LS2N (UMR CNRS 6004), Dalila Tamzalit
16:37
22m
Talk
Towards Model-based Bias Mitigation in Machine Learning (Virtual, P&I)
Technical Track
Alfa Yohannis University of York, Universitas Pradita, Dimitris Kolovos University of York