Time-Aware Coverage Criteria for Testing of AI-Enabled Hybrid Control Systems
Modern cyber-physical systems (CPSs) that must perform complex control tasks (e.g., autonomous driving) increasingly use AI-enabled controllers, mainly based on deep neural networks (DNNs). The quality assurance of such systems is of vital importance. However, their verification can be extremely challenging, due to their complexity and uninterpretable decision logic. Falsification is an established approach for CPS quality assurance which, instead of attempting to prove system correctness, aims at finding a time-variant input signal that violates a formal specification describing the desired behaviour; it often employs a search-based testing approach that tries to minimize the robustness of the specification, given by its quantitative semantics. However, the guidance provided by robustness is mostly black-box and relates only to the system output; it does not reveal whether the temporal internal behaviour of the neural network controller has been explored sufficiently. To bridge this gap, in this paper we make an early attempt and propose four time-aware coverage criteria specifically designed for neural network controllers in the context of CPS, which consider different features by design: the simple temporal activation of a neuron, the continuous activation of a neuron over a given duration, and the differential neuron activation behaviour over time. We further show that these criteria can be employed in the falsification process to provide more exploration in the search. Preliminary experiments have been performed on an Adaptive Cruise Control system and show that considering coverage during falsification increases the falsification rate.
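As an illustration only (not the paper's formal definitions), the three activation features named above can be sketched as coverage measures over a recorded activation trace. This minimal NumPy sketch assumes activations are collected as a (timesteps × neurons) array and a fixed activation threshold; all function names and the thresholding convention are assumptions for illustration.

```python
import numpy as np

def temporal_activation_coverage(trace, threshold=0.0):
    """Fraction of neurons active (above threshold) at some timestep.
    trace: array of shape (timesteps, neurons)."""
    active = (trace > threshold).any(axis=0)
    return float(active.mean())

def duration_coverage(trace, duration, threshold=0.0):
    """Fraction of neurons that stay active for at least `duration`
    consecutive timesteps (continuous activation over a given duration)."""
    active = trace > threshold
    covered = np.zeros(trace.shape[1], dtype=bool)
    for n in range(trace.shape[1]):
        run = best = 0
        for t in range(trace.shape[0]):
            run = run + 1 if active[t, n] else 0
            best = max(best, run)
        covered[n] = best >= duration
    return float(covered.mean())

def differential_coverage(trace, threshold=0.0):
    """Fraction of neurons whose activation state changes at least once
    over time (differential activation behaviour)."""
    active = trace > threshold
    changed = (active[1:] != active[:-1]).any(axis=0)
    return float(changed.mean())
```

For example, for a trace of 3 timesteps over 2 neurons where neuron 0 is always active and neuron 1 is active only at the middle step, both neurons are temporally covered, only neuron 0 satisfies a 3-step duration requirement, and only neuron 1 exhibits a state change.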