Performance testing remains a challenge, particularly for complex systems. Application-, platform-, and workload-based factors can all influence the performance of the software under test. Common approaches to generating platform- and workload-based test conditions rely on system models, source code analysis, real-usage modeling, or use-case-based design techniques. However, building a detailed performance model is often difficult, and such artifacts may not be available during testing. Test automation solutions, such as automated test case generation, can reduce effort and cost while improving coverage of the intended test criteria. Moreover, if the testing system can learn the optimal policy for generating test cases, the learned policy can be reused in subsequent testing situations, such as testing variants or evolved versions of the software and different testing scenarios, saving further cost and computation time. In this research, we present an autonomous performance testing framework that uses model-free reinforcement learning augmented by fuzzy logic and self-adaptive strategies. Without access to a system model or source code, it learns the optimal policy for generating platform- and workload-based test conditions that meet the intended testing objective. Fuzzy logic and the self-adaptive strategy help tackle uncertainty and improve the accuracy and adaptivity of the learning. Our evaluation experiments show that the proposed framework generates test conditions efficiently and adapts to varying testing situations.
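To make the idea of model-free reinforcement learning for test-condition generation concrete, the sketch below shows a minimal tabular Q-learning agent that searches for platform/workload conditions violating a response-time target. It is an illustrative sketch only, not the paper's implementation: it assumes a crisp state abstraction (where the framework described above would use fuzzy labels), a fixed exploration rate (where a self-adaptive strategy would tune it), and a toy simulated system under test. All identifiers (`ACTIONS`, `observed_response_time`, the reward shape) are hypothetical.

```python
import random
from collections import defaultdict

# Hypothetical test-condition modifications the agent can apply.
ACTIONS = ["reduce_cpu", "reduce_memory", "increase_workload"]
TARGET_RESPONSE_TIME = 1.0  # seconds; the intended testing objective

def observed_response_time(cpu, memory, workload):
    """Toy stand-in for measuring the real system under test."""
    return workload / max(cpu * memory, 1e-6)

def discretize(response_time):
    """Crisp state abstraction; the framework instead uses fuzzy
    labels to soften these bin boundaries under uncertainty."""
    ratio = response_time / TARGET_RESPONSE_TIME
    if ratio < 0.5:
        return "far_below"
    if ratio < 1.0:
        return "near"
    return "violated"

def run_episode(q_table, epsilon=0.2, alpha=0.5, gamma=0.9, max_steps=50):
    cpu, memory, workload = 4.0, 4.0, 1.0  # initial test condition
    state = discretize(observed_response_time(cpu, memory, workload))
    for _ in range(max_steps):
        # Epsilon-greedy action selection; a self-adaptive variant
        # would decay epsilon as the learned policy becomes reliable.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q_table[(state, a)])
        if action == "reduce_cpu":
            cpu = max(cpu - 0.5, 0.5)
        elif action == "reduce_memory":
            memory = max(memory - 0.5, 0.5)
        else:
            workload += 1.0
        rt = observed_response_time(cpu, memory, workload)
        next_state = discretize(rt)
        # Reward grows as the measured response time approaches,
        # then exceeds, the target (an assumed reward shape).
        reward = min(rt / TARGET_RESPONSE_TIME, 2.0)
        best_next = max(q_table[(next_state, a)] for a in ACTIONS)
        q_table[(state, action)] += alpha * (
            reward + gamma * best_next - q_table[(state, action)]
        )
        state = next_state
        if state == "violated":  # testing objective met
            return True
    return False

q_table = defaultdict(float)
for episode in range(100):
    run_episode(q_table)
```

Because the agent is model-free, the learned `q_table` can in principle be carried over to a variant or evolved version of the software and refined with a few further episodes, which is the reuse property the abstract highlights.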