Towards Model-based Bias Mitigation in Machine Learning (Virtual, P&I)
Models produced by machine learning are not guaranteed to be free from bias, particularly when they are trained and tested with data produced in discriminatory environments. Such bias can be unethical, especially when the data contains sensitive attributes such as sex, race, or age. Existing approaches help mitigate these biases by providing bias metrics and mitigation algorithms, but users must implement their own code in general-purpose or statistical programming languages, which can be demanding for users with little experience in programming or in fairness in machine learning. We present FairML, a model-based approach that facilitates bias measurement and mitigation with reduced software development effort. Our evaluation shows that FairML requires fewer lines of code to produce measurement values comparable to those produced by the baseline code.
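To make the notion of a "bias metric" concrete, the following is a minimal plain-Python sketch of one widely used metric, statistical parity difference. It is not FairML itself or its API; the function name and the toy data are illustrative assumptions.

```python
# Illustrative sketch (not FairML's implementation): statistical parity
# difference, a common bias metric over binary outcomes and a sensitive
# attribute that splits individuals into privileged/unprivileged groups.

def statistical_parity_difference(outcomes, groups, privileged):
    """P(favorable | unprivileged) - P(favorable | privileged).

    outcomes:   list of 0/1 predictions (1 = favorable outcome)
    groups:     list of group labels, aligned with outcomes
    privileged: the label of the privileged group
    A value of 0 indicates parity; a negative value means the
    unprivileged group receives the favorable outcome less often.
    """
    priv = [o for o, g in zip(outcomes, groups) if g == privileged]
    unpriv = [o for o, g in zip(outcomes, groups) if g != privileged]
    rate = lambda xs: sum(xs) / len(xs)
    return rate(unpriv) - rate(priv)

# Toy example with sex as the sensitive attribute ("M" assumed privileged):
outcomes = [1, 0, 0, 1, 0, 1, 1, 0]
groups   = ["M", "F", "M", "M", "F", "F", "M", "F"]
print(statistical_parity_difference(outcomes, groups, "M"))  # -> -0.5
```

Here the privileged group's favorable-outcome rate is 0.75 versus 0.25 for the unprivileged group, so the metric reports -0.5, flagging a potential bias that a mitigation algorithm would then try to reduce.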
Thu 27 Oct (displayed time zone: Eastern Time, US & Canada)
15:30 - 17:00 | Applications II (Tools & Demonstrations / Technical Track / Journal-first) at A-3502.1
15:30 (22m) Talk | Digital Twin as Risk Free Experimentation Aid for Techno-socio-economic Systems (P&I, Technical Track). Souvik Barat (Tata Consultancy Services Research), Vinay Kulkarni (Tata Consultancy Services Research), Tony Clark (Aston University), Balbir Barn (Middlesex University, UK)
15:52 (22m) Talk | Digital TwinCity: A Holistic Approach towards Comparative Analysis of Business Processes (Demo, Tools & Demonstrations). Shinobu Saito (NTT)
16:15 (22m) Talk | Facilitating the migration to the microservice architecture via model-driven reverse engineering and reinforcement learning (J1st, Journal-first). Shekoufeh Rahimi (University of Isfahan), MohammadHadi Dehghani (Johannes Kepler University Linz), Massimo Tisi (IMT Atlantique, LS2N (UMR CNRS 6004)), Dalila Tamzalit
16:37 (22m) Talk | Towards Model-based Bias Mitigation in Machine Learning (Virtual, P&I, Technical Track)