Restricted Natural Language and Model-based Adaptive Test Generation for Autonomous Driving
With the ultimate goal of reducing car accidents, autonomous driving has attracted considerable attention in recent years. However, recently reported crashes indicate that this goal is far from being achieved. Hence, cost-effective testing of autonomous driving systems (ADSs) has become a prominent research topic. Classical model-based testing (MBT), i.e., generating test cases from test models and then executing them, is ineffective for testing ADSs, mainly because ADSs are constantly exposed to ever-changing operating environments and exhibit uncertain internal behaviors due to the AI techniques they employ. Thus, MBT must be adaptive, guiding test case generation based on test execution results in a step-wise manner. To this end, we propose a natural language and model-based approach, named LiveTCM, which automatically generates and executes test case specifications (TCSs) by interacting with an ADS under test and its environment. LiveTCM is evaluated with an open-source ADS and two test generation strategies: Deep Q-Network (DQN)-based and Random. Results show that LiveTCM with DQN can generate TCSs with 56 steps on average in 60 seconds, leading to 6.4 test oracle violations and covering 14 APIs per TCS on average.
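To illustrate the idea of adaptive, step-wise test generation driven by execution feedback, the sketch below shows a minimal reinforcement-learning loop in Python. It is not the paper's implementation: the states, test-step actions, and reward signal are hypothetical stand-ins, and a tabular Q-learner replaces the DQN for brevity.

```python
import random

# Hypothetical test steps that could be issued to an ADS simulator.
ACTIONS = ["accelerate", "brake", "change_lane", "spawn_pedestrian"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2  # learning rate, discount, exploration

q_table = {}  # (state, action) -> estimated value


def q(state, action):
    return q_table.get((state, action), 0.0)


def choose_step(state):
    # Epsilon-greedy: usually pick the most promising next test step,
    # occasionally explore a random one.
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q(state, a))


def execute_step(state, action):
    # Placeholder for executing the step against the ADS and its
    # environment; rewards oracle violations, penalizes uneventful steps.
    violated = random.random() < 0.1   # stand-in for oracle checking
    next_state = (state + 1) % 10      # stand-in for the observed ADS state
    return next_state, (1.0 if violated else -0.01), violated


def generate_tcs(max_steps=56):
    """Generate one TCS step by step, learning from each execution result."""
    state, tcs, violations = 0, [], 0
    for _ in range(max_steps):
        action = choose_step(state)
        next_state, reward, violated = execute_step(state, action)
        # Update the value estimate from the observed execution result,
        # so later step choices adapt to what the ADS actually did.
        best_next = max(q(next_state, a) for a in ACTIONS)
        q_table[(state, action)] = q(state, action) + ALPHA * (
            reward + GAMMA * best_next - q(state, action))
        tcs.append(action)
        violations += int(violated)
        state = next_state
    return tcs, violations


steps, n_violations = generate_tcs()
```

The key point the sketch captures is the feedback loop: each test step is executed before the next one is chosen, so generation adapts to the ADS's actual behavior rather than following a fixed, pre-generated sequence.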
Wed 13 Oct, 19:00 - 20:00 (Osaka, Sapporo, Tokyo time zone)

- Restricted Natural Language and Model-based Adaptive Test Generation for Autonomous Driving (P&I)
- DataTime: A Framework to Smoothly Integrate Past, Present and Future into Models (P&I)
- Model-Driven Simulation-Based Analysis for Multi-Robot Systems (FT)