ICGT 2025
Wed 11 - Thu 12 June 2025 Koblenz, Germany
co-located with STAF 2025
Thu 12 Jun 2025 15:30 - 16:00 at M 201 - ICGT Session 6 - Journal First

Today, model checking is one of the essential techniques for verifying software systems. It can check properties such as reachability, in which the entire state space is searched for a desired state. However, model checking can run into the state space explosion problem: because resource usage grows exponentially, not all states can be generated. Although the results of recent model checking approaches are promising, there is still room for improvement in accuracy and in the number of explored states. In this paper, using deep reinforcement learning and two neural networks, we propose an approach to increase the accuracy of the generated witnesses and reduce the use of hardware resources. In this approach, an agent starts exploring the state space without any prior knowledge and gradually learns which actions are proper or improper from the rewards and penalties it receives from the environment. Once the dataset is filled with the agent’s experiences, two neural networks evaluate the quality of each operation in each state, and the best action is selected. The significant challenges in the implementation are encoding the states, feature engineering, feature selection, reward engineering, handling invalid actions, and configuring the neural networks. The proposed approach has been implemented in the Groove toolset and, in most of the case studies, it overcomes the problem of state space explosion. It also outperforms existing solutions in generating shorter witnesses and exploring fewer states: on average, the proposed approach is nearly 400% better than other approaches in terms of explored states and 300% better in witness length. Also, on average, it is 37% more accurate than the others in finding the goal state.
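The abstract's core loop (an agent exploring a state space with rewards and penalties, invalid-action handling, and two networks that score actions so the best one is selected) can be illustrated with a minimal DQN-style sketch. This is not the paper's implementation: the toy chain-shaped state space, the one-hot encoding, the reward values, and the linear Q-networks are all assumptions chosen for brevity; the paper operates on graph transformation state spaces inside Groove.

```python
import numpy as np

# Toy reachability search: states 0..N-1, goal is state N-1 (an assumed
# stand-in for a graph transformation state space). Two networks, as in
# DQN-style approaches: the online net picks actions, the target net
# stabilizes the temporal-difference target.
N_STATES, N_ACTIONS = 8, 2
rng = np.random.default_rng(0)

def encode(s):
    """One-hot state encoding (a simple stand-in for feature engineering)."""
    x = np.zeros(N_STATES)
    x[s] = 1.0
    return x

def valid_actions(s):
    """Handle invalid actions by masking: cannot move left from state 0."""
    return [1] if s == 0 else [0, 1]

def step(s, a):
    s2 = max(0, min(N_STATES - 1, s + (1 if a == 1 else -1)))
    done = s2 == N_STATES - 1
    reward = 10.0 if done else -1.0   # goal reward, per-step penalty
    return s2, reward, done

# Linear Q-networks: Q(s, a) = (W @ encode(s))[a]
W_online = np.zeros((N_ACTIONS, N_STATES))
W_target = W_online.copy()

def select(s, eps):
    acts = valid_actions(s)
    if rng.random() < eps:                 # epsilon-greedy exploration
        return int(rng.choice(acts))
    q = W_online @ encode(s)
    return max(acts, key=lambda a: q[a])   # argmax over valid actions only

alpha, gamma = 0.1, 0.95
for episode in range(300):
    s, eps = 0, max(0.05, 1.0 - episode / 200)
    for _ in range(50):
        a = select(s, eps)
        s2, r, done = step(s, a)
        # TD target bootstrapped from the target network
        target = r if done else r + gamma * max(
            (W_target @ encode(s2))[a2] for a2 in valid_actions(s2))
        td_err = target - (W_online @ encode(s))[a]
        W_online[a] += alpha * td_err * encode(s)
        s = s2
        if done:
            break
    if episode % 20 == 0:
        W_target = W_online.copy()         # periodic target-network sync

# Greedy rollout: the learned policy yields a witness path to the goal.
s, path = 0, [0]
while s != N_STATES - 1 and len(path) < 20:
    s, _, _ = step(s, select(s, eps=0.0))
    path.append(s)
print(path)
```

The two-network split mirrors the abstract's description of evaluating action quality per state: one network is trained on every experience, while a periodically synced copy provides stable targets, which is a standard way to keep bootstrapped Q-learning from diverging.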

Thu 12 Jun

Displayed time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna

15:30 - 17:00
ICGT Session 6 - Journal First (ICGT Research Papers) at M 201

Session Chair: Gabriele Taentzer

15:30
30m
Talk
Using deep reinforcement learning to search reachability properties in systems specified through graph transformations
ICGT Research Papers
Vahid Rafe (City St George’s, University of London), Mohammadjavad Mehrabi (Arak University)
16:00
30m
Talk
Whole test suite generation from graph transformation specifications using ant colony optimization
ICGT Research Papers
Simin Ghasemi (Arak University), Vahid Rafe (City St George’s, University of London), Anvar Bahrampour (Islamic Azad University), Reiko Heckel (University of Leicester)