Control systems are ubiquitous and often at the core of Cyber-Physical Systems, such as cars and aeroplanes. They are implemented as embedded software that interacts in closed loop with the physical world through sensors and actuators. As a consequence, the software cannot simply be tested in isolation. To close the loop in a testing environment and to root-cause failures generated by different parts of the system, executable models are used to abstract specific components. Different testing setups can be implemented by abstracting different elements; the most common are model-in-the-loop, software-in-the-loop, hardware-in-the-loop, and real-physics-in-the-loop. In this paper, we discuss the properties of these setups and the types of faults they can expose. We develop a comprehensive case study using the Crazyflie, a drone whose software and hardware are open source. We implement all the most common testing setups and ensure the consistent injection of faults in each of them. We inject faults in the control system and compare the resulting behaviour with the nominal performance of the non-faulty software. Our results show the specific capabilities of the different setups in exposing faults. Contrary to intuition and previous literature, we show that the setups do not form a strict hierarchy, and that they are best designed to maximize the differences across them rather than to be as close as possible to reality.
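The closed-loop testing idea described in the abstract can be illustrated with a minimal software-in-the-loop sketch: the plant is replaced by an executable model, the controller runs as real software logic, and a fault is injected consistently so that the faulty trajectory can be compared against the nominal one. This is a toy example only, not the paper's Crazyflie setup; the dynamics, gains, and the `simulate` function are invented for illustration.

```python
# Minimal software-in-the-loop sketch (illustrative, not the paper's setup):
# a first-order plant x' = -x + u under proportional control, with an
# injected "stuck sensor" fault, comparing faulty vs. nominal responses.

def simulate(steps=200, dt=0.01, kp=2.0, setpoint=1.0, stuck_sensor_at=None):
    """Closed-loop Euler simulation of the plant x' = -x + u.

    If stuck_sensor_at is given, the sensor reading freezes at that step,
    emulating a sensor fault injected into the loop.
    """
    x = 0.0          # plant state (executable model of the physics)
    reading = 0.0    # sensor output fed to the controller
    trace = []
    for k in range(steps):
        if stuck_sensor_at is None or k < stuck_sensor_at:
            reading = x                    # healthy sensor
        u = kp * (setpoint - reading)      # controller acts on the reading
        x += dt * (-x + u)                 # plant dynamics (Euler step)
        trace.append(x)
    return trace

nominal = simulate()
faulty = simulate(stuck_sensor_at=50)
# The divergence between the two traces is the observable effect of the fault.
deviation = max(abs(a - b) for a, b in zip(nominal, faulty))
```

In the other setups the same fault would be injected with the controller model (model-in-the-loop), the compiled firmware (hardware-in-the-loop), or the real drone (real-physics-in-the-loop) closing the loop instead of this simple simulation.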
Wed 17 Apr | Displayed time zone: Lisbon
14:00 - 15:30 | Testing 2 (Research Track / Software Engineering Education and Training / Software Engineering in Practice / Demonstrations / Journal-first Papers) at Eugénio de Andrade | Chair(s): Jonathan Bell (Northeastern University)
14:00 15m Talk | Ripples of a Mutation — An Empirical Study of Propagation Effects in Mutation Testing (Research Track) | Hang Du (University of California at Irvine), Vijay Krishna Palepu (Microsoft), James Jones (University of California at Irvine) | DOI
14:15 15m Talk | Fast Deterministic Black-box Context-free Grammar Inference (Research Track) | Mohammad Rifat Arefin (The University of Texas at Arlington), Suraj Shetiya (University of Texas at Arlington), Zili Wang (Iowa State University), Christoph Csallner (University of Texas at Arlington) | Pre-print, Media Attached
14:30 15m Talk | Bridging Theory to Practice in Software Testing Teaching through Team-based Learning (TBL) and Open Source Software (OSS) Contribution (Software Engineering Education and Training)
14:45 15m Talk | Productive Coverage: Improving the Actionability of Code Coverage (Software Engineering in Practice) | Marko Ivanković (Google; Universität Passau), Goran Petrović (Google), Yana Kulizhskaya (Google), Mateusz Lewko (Google), Luka Kalinovčić (no affiliation), René Just (University of Washington), Gordon Fraser (University of Passau)
15:00 15m Talk | Taming Timeout Flakiness: An Empirical Study of SAP HANA (Software Engineering in Practice) | Pre-print
15:15 7m Talk | Testing Abstractions for Cyber-Physical Control Systems (Journal-first Papers) | Claudio Mandrioli (University of Luxembourg), Max Nyberg Carlsson (Lund University), Martina Maggio (Saarland University, Germany / Lund University, Sweden) | Pre-print
15:22 7m Talk | FaultFuzz: A Coverage Guided Fault Injection Tool for Distributed Systems (Demonstrations) | Wenhan Feng (Institute of Software, Chinese Academy of Sciences), Qiugen Pei (Joint Laboratory on Cyberspace Security, China Southern Power Grid), Yu Gao (Institute of Software, Chinese Academy of Sciences), Dong Wang (Institute of Software, Chinese Academy of Sciences), Wensheng Dou (Institute of Software, Chinese Academy of Sciences), Jun Wei (Institute of Software, Chinese Academy of Sciences; University of Chinese Academy of Sciences; University of Chinese Academy of Sciences Chongqing School), Zheheng Liang (Joint Laboratory on Cyberspace Security, China Southern Power Grid), Zhenyue Long (Joint Laboratory on Cyberspace Security, China Southern Power Grid) | Pre-print