Metamorphic Object Insertion for Testing Object Detection Systems
Recent advances in deep neural networks (DNNs) have led to object detectors (ODs) that can rapidly process images or videos and recognize the objects they contain. Despite promising progress by industry vendors such as Amazon and Google in commercializing deep learning-based ODs as standard computer vision services, ODs, like traditional software, may still produce incorrect results. These errors, in turn, can lead to severe negative outcomes for users. For instance, an autonomous driving system that fails to detect pedestrians can cause accidents or even fatalities. Despite their importance, however, principled, systematic methods for testing ODs do not yet exist.
To fill this critical gap, we introduce the design and implementation of MetaOD, a metamorphic testing system designed specifically for ODs to effectively uncover erroneous detection results. To this end, we (1) synthesize natural-looking images by inserting extra object instances into background images, and (2) design metamorphic conditions asserting the equivalence of OD results between the original and synthetic images, after excluding predictions on the inserted objects. MetaOD is designed as a streamlined workflow that performs object extraction, object selection, and object insertion. We develop a set of practical techniques to realize this workflow effectively and to generate diverse, natural-looking images for testing. Evaluated on four commercial OD services and four pretrained models provided by the TensorFlow API, MetaOD found tens of thousands of detection failures. To further demonstrate the practical usage of MetaOD, we use the synthetic images that cause erroneous detection results to retrain the model. Our results show that model performance increases significantly, from an mAP score of 9.3 to an mAP score of 10.5.
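The metamorphic relation described above can be sketched in code. The following is a minimal, hypothetical illustration, not MetaOD's actual implementation: detections are assumed to be (label, bounding-box) pairs, matching is done by class label and an IoU threshold, and any detection on the synthetic image that overlaps the inserted object's region is excluded before comparing against the original image's detections.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0


def check_metamorphic_relation(orig_dets, synth_dets, inserted_box,
                               iou_thresh=0.5):
    """Return detections that violate the metamorphic relation.

    orig_dets / synth_dets: lists of (label, box) returned by the detector
    on the original and synthetic image; inserted_box is the region where
    the extra object instance was pasted. After excluding predictions on
    the inserted object, the two detection sets should be equivalent.
    """
    # Exclude detections of the inserted object itself.
    remaining = [d for d in synth_dets
                 if iou(d[1], inserted_box) < iou_thresh]

    violations = []
    # Spurious detections: present after insertion but not before.
    for label, box in remaining:
        if not any(l == label and iou(box, b) >= iou_thresh
                   for l, b in orig_dets):
            violations.append((label, box))
    # Missed detections: present before insertion but lost afterwards.
    for label, box in orig_dets:
        if not any(l == label and iou(box, b) >= iou_thresh
                   for l, b in remaining):
            violations.append((label, box))
    return violations
```

For example, if the detector finds a person in the background image, and on the synthetic image finds the same person plus the inserted dog, the relation holds; if the person disappears after insertion, the check reports it as a violation.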
Thu 24 Sep (displayed time zone: UTC, Coordinated Universal Time)
09:10 - 10:10 | Testing and AI (Research Papers / Journal-first Papers) at Koala
Chair(s): Xiaoyuan Xie, School of Computer Science, Wuhan University, China

09:10 (20m) Talk | Predicting failures in multi-tier distributed systems (Journal-first Papers)
Leonardo Mariani (University of Milano-Bicocca), Mauro Pezzè (USI Lugano, Switzerland), Oliviero Riganelli (University of Milano-Bicocca, Italy), Rui Xin (USI Università della Svizzera italiana)

09:30 (20m) Talk | Cats Are Not Fish: Deep Learning Testing Calls for Out-Of-Distribution Awareness (Research Papers)
David Berend (Nanyang Technological University, Singapore), Xiaofei Xie (Nanyang Technological University), Lei Ma (Kyushu University), Lingjun Zhou (College of Intelligence and Computing, Tianjin University), Yang Liu (Nanyang Technological University, Singapore), Chi Xu (Singapore Institute of Manufacturing Technology, A*STAR), Jianjun Zhao (Kyushu University)

09:50 (20m) Talk | Metamorphic Object Insertion for Testing Object Detection Systems (Research Papers)