Bias behind the Wheel: Fairness Testing of Autonomous Driving Systems
Autonomous driving systems are on track to become the predominant mode of transportation in the future. These systems are susceptible to software bugs, which can result in severe injuries or even fatalities for both pedestrians and passengers. Extensive research efforts have been devoted to testing autonomous driving systems; however, fairness testing of these systems remains largely unexplored in the literature. This article conducts fairness testing of automated pedestrian detection, a crucial but under-explored issue in autonomous driving systems. We evaluate eight state-of-the-art deep learning-based pedestrian detectors across demographic groups on large-scale real-world datasets. To enable thorough fairness testing, we provide extensive annotations for the datasets, resulting in 8,311 images with 16,070 gender labels, 20,115 age labels, and 3,513 skin tone labels. Our findings reveal significant fairness issues, particularly related to age: the proportion of undetected children is 20.14% higher than that of adults. Furthermore, we explore how various driving scenarios affect the fairness of pedestrian detectors. We find that pedestrian detectors exhibit significant gender bias at night, potentially exacerbating the widespread societal concern about women's safety when out at night. Moreover, we observe that pedestrian detectors can demonstrate both enhanced fairness and superior performance under specific driving conditions, which challenges the fairness-performance trade-off theory widely acknowledged in the fairness literature. We publicly release the code, data, and results to support future research on fairness in autonomous driving.
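The age disparity quoted above is a difference in per-group miss rates. As a rough illustration of how such a number can be computed, the Python sketch below matches ground-truth pedestrian boxes to detector output and reports the miss rate for each demographic group. It is a minimal sketch under assumed inputs: the data layout, the field names, and the 0.5 IoU matching threshold are illustrative choices, not the paper's actual evaluation code.

```python
# Minimal sketch, assuming COCO-style inputs: `annotations` is a list of
# ground-truth pedestrians ({"image_id", "bbox", "group"}) and `detections`
# maps each image_id to a list of predicted (x1, y1, x2, y2) boxes.
# Field names and the 0.5 IoU threshold are assumptions for illustration.
from collections import defaultdict

def iou(a, b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def miss_rate_by_group(annotations, detections, iou_threshold=0.5):
    """Fraction of labelled pedestrians in each demographic group that
    no detected box overlaps at or above `iou_threshold`."""
    totals, misses = defaultdict(int), defaultdict(int)
    for person in annotations:
        totals[person["group"]] += 1
        candidates = detections.get(person["image_id"], [])
        if not any(iou(person["bbox"], box) >= iou_threshold for box in candidates):
            misses[person["group"]] += 1
    return {group: misses[group] / totals[group] for group in totals}
```

With the per-group rates in hand, a disparity like the one reported in the abstract corresponds to comparing two groups' rates, e.g. `rates["child"] - rates["adult"]` (hypothetical group labels).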
Tue 24 Jun (times shown in the Amsterdam / Berlin / Bern / Rome / Stockholm / Vienna time zone)
16:00 - 17:40 | Fairness and Green | Journal First / Research Papers / Demonstrations | Room: Aurora A | Chair(s): Aldeida Aleti (Monash University)
16:00 (10m, Talk) | MANILA: A Low-Code Application to Benchmark Machine Learning Models and Fairness-Enhancing Methods | Demonstrations | Giordano d'Aloisio (University of L'Aquila)
16:10 (20m, Talk) | Fairness Testing of Machine Translation Systems | Journal First | Zeyu Sun (Institute of Software, Chinese Academy of Sciences), Zhenpeng Chen (Nanyang Technological University), Jie M. Zhang (King's College London), Dan Hao (Peking University)
16:30 (20m, Talk) | Bias behind the Wheel: Fairness Testing of Autonomous Driving Systems | Journal First | Xinyue Li (Peking University), Zhenpeng Chen (Nanyang Technological University), Jie M. Zhang (King's College London), Federica Sarro (University College London), Ying Zhang (Peking University), Xuanzhe Liu (Peking University)
16:50 (10m, Talk) | FAMLEM, the FAst ModuLar Energy Meter at Code Level | Demonstrations | Max Weber (Leipzig University), Johannes Dorn (Leipzig University), Sven Apel (Saarland University), Norbert Siegmund (Leipzig University)
17:00 (20m, Talk) | NLP Libraries, Energy Consumption and Runtime - An Empirical Study | Research Papers | Rajrupa Chattaraj (Indian Institute of Technology Tirupati, India), Sridhar Chimalakonda (Indian Institute of Technology Tirupati)
17:20 (20m, Talk) | An adaptive language-agnostic pruning method for greener language models for code | Research Papers | Mootez Saad (Dalhousie University), José Antonio Hernández López (Linköping University), Boqi Chen (McGill University), Daniel Varro (Linköping University / McGill University), Tushar Sharma (Dalhousie University)
Aurora A is the first room in the Aurora wing.
When facing the main Cosmos Hall, access to the Aurora wing is on the right, close to the side entrance of the hotel.