Rachel Bellamy – IBM Research, USA
IBM Research has created several open-source Trusted AI toolkits for AI practitioners and researchers. One of these, AI Fairness 360 (AIF360), provides a comprehensive set of metrics for checking datasets and machine learning models for unwanted bias, together with state-of-the-art algorithms to mitigate such bias. The toolkit contains over 30 fairness metrics and 9 bias-mitigation algorithms developed by the research community, and is designed to translate algorithmic research from the lab into actual practice in domains as wide-ranging as finance, human capital management, healthcare, and education. In this talk I will introduce the toolkits and discuss the broader considerations of designing AI that can be trusted.
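To give a flavour of what such fairness metrics compute, here is a minimal pure-Python sketch of two group-fairness measures that AIF360 exposes through its metrics classes (statistical parity difference and disparate impact). This is an illustration of the underlying arithmetic, not the toolkit's own API; the toy predictions and group labels are invented for the example.

```python
# Hedged sketch: two common group-fairness metrics computed by hand.
# AIF360 provides these (and many more) via its metrics classes; the
# data below is purely illustrative.

def selection_rate(labels, groups, group_value):
    """Fraction of positive (favourable) predictions within one group."""
    members = [y for y, g in zip(labels, groups) if g == group_value]
    return sum(members) / len(members)

def statistical_parity_difference(labels, groups, unpriv, priv):
    """P(y=1 | unprivileged) - P(y=1 | privileged); 0 means parity."""
    return (selection_rate(labels, groups, unpriv)
            - selection_rate(labels, groups, priv))

def disparate_impact(labels, groups, unpriv, priv):
    """Ratio of selection rates; the common '80% rule' flags values below 0.8."""
    return (selection_rate(labels, groups, unpriv)
            / selection_rate(labels, groups, priv))

# Toy predictions: 1 = favourable outcome; group 0 = unprivileged, 1 = privileged.
preds  = [1, 0, 0, 1, 1, 1, 0, 1]
groups = [0, 0, 0, 0, 1, 1, 1, 1]

print(statistical_parity_difference(preds, groups, 0, 1))  # 0.5 - 0.75 = -0.25
print(disparate_impact(preds, groups, 0, 1))               # 0.5 / 0.75 ≈ 0.667
```

In the toolkit itself, these values are obtained from a metrics object constructed over a dataset with declared protected attributes, and a bias-mitigation algorithm can then be applied when the metrics reveal unwanted disparity.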
Rachel is a Principal Research Scientist and Chair of the Exploratory Computer Science Council at the IBM T. J. Watson Research Center, Yorktown Heights, New York. Under her management, the Council oversees a research portfolio of exploratory science projects in research areas including Biologically-Inspired Computation, Socio-Technical Systems, Artificial General Intelligence and Distributed Computing. She has done extensive research on human-computer interaction, most recently contributing to IBM Research's Trusted AI toolkits, including AI Fairness 360, AI Explainability 360 and AI FactSheets. During a career spanning over 30 years, she has collaborated extensively with partners in academia and has mentored many doctoral students and early-career researchers. Rachel received her doctorate in cognitive psychology from the University of Cambridge, UK in 1991 and a Bachelor of Science in psychology with mathematics and computer science from the University of London in 1986. As a Project Lead at Apple Computer's Advanced Technology Group, working with Apple Classrooms of Tomorrow, she developed and field-tested some of the earliest examples of online, media-rich, collaborative learning environments.