The integration of AI techniques in the domain of software testing represents a promising frontier, one that is still at the dawn of its potential. Over the past few years, software developers have witnessed a surge in innovative approaches aimed at streamlining the development lifecycle, with a particular focus on the testing phase. These approaches harness the capabilities of AI, including Convolutional Neural Networks (CNNs), Deep Neural Networks (DNNs), and Large Language Models (LLMs), to transform the way we verify and validate software applications.
The adoption of AI in software testing yields numerous advantages. It significantly reduces the time and effort invested in repetitive and mundane testing tasks, allowing human testers to focus on more complex and creative aspects of testing, such as exploratory testing and user experience evaluation. Additionally, AI-driven testing improves software quality by enhancing test coverage and mutation scores. The outcome is not just cost savings but also increased customer satisfaction, as the likelihood of critical software defects reaching production is greatly diminished.
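As a point of reference for the metric mentioned above: mutation score is commonly defined as the fraction of artificially injected faults ("mutants") that a test suite detects ("kills"). The following is a minimal illustrative sketch, not part of any workshop tool; the function name and the numbers are invented for the example.

```python
def mutation_score(killed: int, total: int) -> float:
    """Return the fraction of generated mutants killed by the test suite."""
    if total == 0:
        raise ValueError("no mutants were generated")
    return killed / total

# Illustrative numbers: a suite that kills 42 of 60 generated mutants.
print(mutation_score(42, 60))  # -> 0.7
```

A higher score suggests a more fault-sensitive test suite, which is one way AI-generated tests are evaluated against human-written ones.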
The AIST workshop aspires to bring together a diverse community of researchers and practitioners. It aims to create a platform for the presentation and discussion of cutting-edge research and development initiatives in the areas of AI-driven software testing. The workshop encourages collaboration, facilitating the exchange of knowledge and ideas, and fostering a holistic understanding of the potential applications that AI offers in the context of software testing. By acknowledging the broad spectrum of perspectives and topics within the AI umbrella, AIST seeks to be a catalyst for innovation, ultimately paving the way toward greater efficiency and effectiveness in software testing.
For more information, see https://aistworkshop.github.io/
Tue 28 May (all times in Eastern Time, US & Canada)

08:00 - 09:00
- 08:00 (60 min, Social): Breakfast & Registration

09:00 - 10:30
- 09:00 (15 min, Day opening): Workshop Opening. Gregory Gay (Chalmers | University of Gothenburg), Sebastiano Panichella (Zurich University of Applied Sciences), Aitor Arrieta (Mondragon University)
- 09:15 (60 min, Keynote): Towards Better Software Quality in the Era of Large Language Models. Lingming Zhang (University of Illinois at Urbana-Champaign)
- 10:15 (15 min, Talk): Generating Minimalist Adversarial Perturbations to Test Object-Detection Models: An Adaptive Multi-Metric Evolutionary Search Approach. Cristopher McIntyre-Garcia, Adrien Heymans, Beril Borali, Won-Sook Le, Shiva Nejati (University of Ottawa)

11:00 - 12:30
- 11:00 (22 min, Talk): "No Free Lunch" when using Large Language Models to Verify Self-Generated Programs
- 11:22 (22 min, Talk): An End-to-End Test Case Prioritization Framework using Optimized Machine Learning Models. Md Asif Khan, Akramul Azim, Ramiro Liscano (Ontario Tech University), Kevin Smith, Yee-Kang Chang, Qasim Tauseef, Gkerta Seferi (IBM)
- 11:45 (22 min, Talk): Iterative Optimization of Hyperparameter-based Metamorphic Transformations. Gaadha Sudheerbabu, Tanwir Ahmad, Dragos Truscan, Ivan Porres (Åbo Akademi University), Jüri Vain (Tallinn University of Technology, Estonia)
- 12:07 (22 min, Talk): Machine Learning for Cross-Vulnerability Prediction in Smart Contracts

14:00 - 15:30
- 14:00 (45 min, Tutorial): A Hands-on Tutorial for Automatic Test Case Generation and Fuzzing for JavaScript. Mitchell Olsthoorn, Annibale Panichella (Delft University of Technology)
- 14:45 (45 min, Tutorial): SoKotHban - Competitive Adversarial Testing of Sokoban Solvers. Addison Crump (CISPA Helmholtz Center for Information Security)

16:00 - 17:30
- 16:00 (75 min, Panel): Panel Discussion
- 17:15 (15 min, Day closing): Workshop Closing. Aitor Arrieta (Mondragon University), Gregory Gay (Chalmers | University of Gothenburg), Sebastiano Panichella (Zurich University of Applied Sciences)
Call for Papers
We invite novel papers from both academia and industry on AI applied to software testing that cover, but are not limited to, the following aspects:
- AI for test case design, test generation, test prioritization, and test reduction.
- AI for load testing and performance testing.
- AI for monitoring and optimizing running systems.
- Explainable AI for software testing.
- Case studies, experience reports, benchmarking, and best practices.
- New ideas, emerging results, and position papers.
- Industrial case studies with lessons learned or practical guidelines.
Papers can be of one of the following types:
- Full Papers (max. 8 pages): Papers presenting mature research results or industrial practices.
- Short Papers (max. 4 pages): Papers presenting new ideas or preliminary results.
- Tool Papers (max. 4 pages): Papers presenting an AI-enabled testing tool. Tool papers should communicate the purpose and use cases for the tool. The tool should be made available (either free to download or for purchase).
- Position Papers (max. 2 pages): Position statements and open challenges, intended to spark discussion or debate.
The reviewing process is single-blind; papers therefore do not need to be anonymized. Papers must conform to the two-column IEEE conference publication format and should be submitted via EasyChair using the following link: https://easychair.org/conferences/?conf=ieeeicst24workshops
All submissions must be original, unpublished, and not submitted for publication elsewhere. Submissions will be evaluated according to the relevance and originality of the work and on their ability to generate discussion among workshop participants. Each submission will be reviewed by three reviewers, and all accepted papers will be published as part of the ICST proceedings. For each accepted paper, at least one author must register for the workshop and present the paper.
Important Dates
- Submission deadline: 29 January 2024 AoE
- Notification of Acceptance: 26 February 2024
- Camera-ready: 1 March 2024
- Workshop: 27 May 2024
Additional Information
For more information, see https://aistworkshop.github.io/