CEGen: Cause-Effect Graph Generation Using Large Language Models (Poster)
Black-box testing plays an essential role in the quality assurance of software development, focusing on the external behavior of systems without considering their internal structures. The Cause-Effect Graph (CEG) is a black-box testing technique that visualizes the relationships between system inputs and outputs in terms of causes and effects. Originally developed to address complex logical scenarios in testing, CEG is applied across diverse fields, including hardware testing and software safety assessments. Despite its utility, creating CEGs is labor-intensive and demands substantial expertise. Existing techniques for generating CEGs struggle to handle natural language specifications and have a limited scope of application. This study presents a novel method that employs Large Language Models (LLMs) to generate CEGs from natural language specifications. The proposed approach uses LLMs to create truth tables from specification statements and then constructs CEGs through algorithmic processes, making it easier for non-experts to generate valid CEGs. The effectiveness of the method is evaluated on ten problems from a tester-training book, achieving a 52% success rate in producing error-free CEGs.
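To illustrate the second stage described above (constructing a CEG from a truth table), the following sketch infers the logical relation linking causes to an effect by checking the effect's value over every row of the truth table. This is a minimal, hypothetical example for a single two-cause effect node; the paper's actual algorithm and the LLM-generated truth-table format are not detailed in the abstract.

```python
from itertools import product

def infer_relation(causes, effect_fn):
    """Classify the cause-effect relation as AND, OR, or OTHER by
    enumerating the full truth table of the given causes.
    (Illustrative sketch only, not the paper's algorithm.)"""
    rows = list(product([False, True], repeat=len(causes)))
    table = {row: effect_fn(dict(zip(causes, row))) for row in rows}
    if all(v == all(row) for row, v in table.items()):
        return "AND"   # effect is true only when every cause is true
    if all(v == any(row) for row, v in table.items()):
        return "OR"    # effect is true when at least one cause is true
    return "OTHER"     # needs a more complex CEG structure (e.g. NOT, masking)

# Hypothetical spec: "The file is saved if the name is valid AND disk space is free."
relation = infer_relation(
    ["valid_name", "disk_free"],
    lambda c: c["valid_name"] and c["disk_free"],
)
print(relation)  # AND
```

A full generator would repeat this classification per effect and emit the corresponding CEG nodes and edges.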