The 6th International Workshop on Artificial Intelligence in Software Testing (AIST 2026) is an in-person event co-located with the 19th IEEE International Conference on Software Testing, Verification and Validation (ICST 2026). The workshop will be held in Daejeon, South Korea.
Call for Papers
Theme and Goals
The integration of AI techniques into the domain of software testing represents a promising frontier, one that is still at the dawn of its potential. Over the past few years, software developers have witnessed a surge in innovative approaches aimed at streamlining the development lifecycle, with a particular focus on the testing phase. These approaches harness the capabilities of AI, including Convolutional Neural Networks (CNNs), Deep Neural Networks (DNNs), and Large Language Models (LLMs), to transform the way we verify and validate software applications.
The adoption of AI in software testing yields numerous advantages. It significantly reduces the time and effort invested in repetitive and mundane testing tasks, allowing human testers to focus on more complex and creative aspects of testing, such as exploratory testing and user experience evaluation. Additionally, AI-driven testing improves software quality by enhancing test coverage and mutation scores. The outcome is not just cost savings but also increased customer satisfaction, as the likelihood of critical software defects reaching production is greatly diminished.
The AIST workshop aspires to bring together a diverse community of researchers and practitioners. It aims to create a platform for the presentation and discussion of cutting-edge research and development initiatives in the area of AI-driven software testing. The workshop encourages collaboration, facilitating the exchange of knowledge and ideas, and fostering a holistic understanding of the potential applications that AI offers in the context of software testing. By acknowledging the broad spectrum of perspectives and topics under the AI umbrella, AIST seeks to be a catalyst for innovation, ultimately paving the way toward more efficient and effective software testing.
Recently, AI systems have also garnered interest in the testing community as "systems under test". The increasing adoption of AI across software systems and domains (autonomous driving, facial recognition, etc.) demands a shift in how software testing is conducted. The study of how software testing should evolve in the era of AI is therefore also within the scope of this workshop. Submissions are expected to present new techniques as well as case studies, experience reports, benchmarking efforts, and best practices. We also seek to promote industrial case studies with lessons learned or practical guidelines. New ideas, emerging results, and position papers are welcome as well.
We invite novel papers from both academia and industry on AI applied to software testing or software testing applied to AI that cover, but are not limited to, the following aspects:
- AI for test case design, test generation, test prioritization, and test reduction.
- AI for load testing and performance testing.
- AI for monitoring and optimizing running systems.
- Explainable AI for software testing.
- Testing of AI systems.
- Case studies, experience reports, benchmarking, and best practices.
- New ideas, emerging results, and position papers.
- Industrial case studies with lessons learned or practical guidelines.
Papers can be of one of the following types:
- Full Papers (max. 8 pages): Papers presenting mature research results or industrial practices.
- Short Papers (max. 4 pages): Papers presenting new ideas or preliminary results.
- Tool Papers (max. 4 pages): Papers presenting an AI-enabled testing tool. Tool papers should communicate the purpose and use cases for the tool. The tool should be made available (either free to download or for purchase).
- Position Papers (max. 2 pages): Position statements and open challenges, intended to spark discussion or debate.
The reviewing process is single-blind; therefore, papers do not need to be anonymized. Papers must conform to the two-column IEEE conference publication format. All submissions must be original, unpublished, and not under review elsewhere. Submissions will be evaluated on the relevance and originality of the work and on their potential to generate discussion among workshop participants. Each submission will be reviewed by three reviewers, and all accepted papers will be published as part of the ICST proceedings. For each accepted paper, at least one author must register for the workshop and present the paper.
Submission site: https://easychair.org/conferences/?conf=aist2026
Important Dates
- Submission deadline: 6 March 2026 (extended from 27 February 2026)
- Notification of acceptance: 27 March 2026
- Camera-ready: 10 April 2026
- Workshop: TBA
Contact
If you have any questions about the workshop, you can contact the chairs:
- Aurora Ramírez (aramirez at uco.es)
- Jeongju Sohn (jeongju.sohn at knu.ac.kr)