The Workshop on Automating Testing (A-TEST, https://a-test.org/) provides a venue for researchers and industry members alike to exchange and discuss trending views, ideas, state of the art, work in progress, and scientific results on automated testing.
This year marks the 15th anniversary of the workshop, with the special theme “Using DSLs for test automation and the automated testing of DSLs”, which sits exactly at the intersection of ECOOP and ISSTA.
Topics of interest include, but are not limited to:
- Effective DSLs for testing.
- Integrating DSLs into existing testing frameworks like Selenium, JUnit, or PyTest.
- DSLs for API testing (including the GUI).
- Languages for specific testing domains like web, mobile, cloud, and embedded systems testing.
- Tools, IDEs, and environments that support the development and use of DSLs in test automation.
- DSLs and Model-Based Testing (MBT), exploring the intersections between DSLs and MBT.
- Real-world applications, success stories, challenges, and lessons learned from using DSLs in industrial settings.
- Novel ideas, emerging trends, and future directions in the development and application of DSLs for test automation.
- Testing of DSLs.
Thu 19 Sep (displayed time zone: Amsterdam/Berlin/Bern/Rome/Stockholm/Vienna)

10:30 - 12:00 | Hands-on: Testing DSLs with DSLs in Rascal and TESTAR (A-TEST at EI 5 Hochenegg)

In celebration of the 15th anniversary of A-TEST, we have chosen a theme that aligns well with both ISSTA and ECOOP: “Using DSLs for testing and the testing of DSLs.” To mark this A-TEST milestone and to put the “work” in workshop, we are hosting a hands-on session where participants can roll up their sleeves and actively experiment with testing QL, a DSL designed for creating web-based questionnaire forms. When testing a DSL, it is essential to focus both on the language’s implementation (including syntax, type checking, evaluation, and rendering) and on the correct execution of the code generated by the compiler. Together, these aspects ensure the DSL is both robust and effective in its intended domain. In our workshop, we will approach this from two angles. For testing the language implementation, we will use a DSL named TestQL, which has been written in Rascal explicitly for testing purposes. For the execution part, we will employ automated scriptless GUI testing with TESTAR to verify the execution of the generated code. But that is not all: the oracles used during the scriptless testing will be generated automatically from the domain-specific knowledge of state invariants within the DSL, demonstrating how domain knowledge can enhance testing effectiveness.

15:30 - 17:00 | Talks (A-TEST)

- 15:30 (30m) Talk: First Experiments on Automated Execution of Gherkin Test Specifications with Collaborating LLM Agents. Severin Bergsmann (Software Competence Center Hagenberg, SCCH), Alexander Schmidt (SCCH), Stefan Fischer (SCCH), Rudolf Ramler (SCCH)
- 16:00 (30m) Talk: GreeDDy: Accelerate Parallel DDMIN. Daniel Vince (University of Szeged, Department of Software Engineering), Ákos Kiss (University of Szeged, Hungary)
- 16:30 (30m) Talk: Use of ChatGPT as an Assistant in the End-to-End Test Script Generation for Android Apps. Boni Garcia (Universidad Carlos III de Madrid), Maurizio Leotta (DIBRIS, University of Genova, Italy), Filippo Ricca (DIBRIS, Università di Genova), Jim Whitehead (University of California, Santa Cruz)
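The invariant-derived oracles mentioned in the hands-on session can be illustrated with a small, hypothetical sketch. All names below (`Question`, `oracle`, the invariant functions, and the example questionnaire fields) are illustrative assumptions for this page, not TESTAR or QL APIs: the idea is simply that DSL-level rules (computed questions are read-only; conditionally guarded questions are visible exactly when their condition holds) become pass/fail checks applied to each GUI state encountered during scriptless exploration.

```python
# Hypothetical sketch of oracles derived from DSL-level state invariants.
# None of these names come from TESTAR or QL; they only illustrate the idea.
from dataclasses import dataclass

@dataclass
class Question:
    name: str
    visible: bool   # is the form field currently shown in the GUI?
    enabled: bool   # can the user edit it?
    value: object   # current answer, or None

def invariant_computed_fields_disabled(state, computed):
    """QL-style invariant: computed questions must never be editable."""
    return all(not q.enabled for q in state if q.name in computed)

def invariant_visible_iff_condition(state, conditions):
    """Questions guarded by a condition are visible exactly when it holds."""
    env = {q.name: q.value for q in state}
    return all(q.visible == cond(env)
               for q in state
               for name, cond in conditions.items() if q.name == name)

def oracle(state, computed, conditions):
    """Combine the invariants into a pass/fail verdict for one GUI state."""
    return (invariant_computed_fields_disabled(state, computed)
            and invariant_visible_iff_condition(state, conditions))

# One GUI state of a questionnaire about selling a house (illustrative):
state = [
    Question("hasSoldHouse", visible=True, enabled=True, value=True),
    Question("sellingPrice", visible=True, enabled=True, value=250000),
    Question("valueResidue", visible=True, enabled=False, value=50000),
]
verdict = oracle(
    state,
    computed={"valueResidue"},  # derived field: must stay read-only
    conditions={"sellingPrice": lambda env: env["hasSoldHouse"] is True},
)
print(verdict)  # True: all invariants hold in this state
```

Because the invariants come from the DSL program itself, the same checks apply to every state a scriptless tool visits, without anyone writing per-screen assertions.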
Accepted Papers
Call for Papers
Authors are invited to submit papers on topics related to DSLs and automated software testing, and to present and discuss them at the event. Paper submissions can be of the following types:
- Full papers (max. 8 pages, including references) describing original, complete, and validated research – either empirical or theoretical – in A-TEST related techniques, tools, or industrial case studies.
- Work-in-progress papers (max. 4 pages) that describe novel, interesting, and high-potential work in progress that has not necessarily reached full completion (e.g., not completely validated).
- Tool papers (max. 4 pages) presenting a testing tool in a way that it could be presented to industry as the start of a successful technology transfer.
- Technology transfer papers (max. 4 pages) describing industry-academia co-operation.
- Position papers (max. 2 pages) that analyse trends and raise issues of importance. Position papers are intended to generate discussion and debate during the workshop.
Topics of interest include, but are not limited to:
- Effective DSLs for testing.
- Integrating DSLs into existing testing frameworks like Selenium, JUnit, or PyTest.
- DSLs for API testing (including the GUI).
- Languages for specific testing domains like web, mobile, cloud, and embedded systems testing.
- Tools, IDEs, and environments that support the development and use of DSLs in test automation.
- DSLs and Model-Based Testing (MBT), exploring the intersections between DSLs and MBT.
- Real-world applications, success stories, challenges, and lessons learned from using DSLs in industrial settings.
- Novel ideas, emerging trends, and future directions in the development and application of DSLs for test automation.
- Testing of DSLs.
All submissions must be in English and in PDF format. Submissions must conform to the ACM Conference Format (https://www.acm.org/publications/proceedings-template). A-TEST 2024 will employ a single-blind review process.
Contributions must be submitted through EasyChair: https://easychair.org/conferences/?conf=atest2024
Each submission will be reviewed by at least three members of the program committee. Full papers will be evaluated on the basis of originality, importance of contribution, soundness, evaluation, quality of presentation, and appropriate comparison to related work. Work-in-progress and position papers will be reviewed with respect to relevance and their ability to start up fruitful discussions. Tool and technology transfer papers will be evaluated based on improvement on the state-of-the-practice and clarity of lessons learned.

Submitted papers must not have been published elsewhere and must not be under review or submitted for review elsewhere during the duration of consideration. To prevent double submissions, the chairs may compare the submissions with related conferences that have overlapping review periods. The double submission restriction applies only to refereed journals and conferences, not to unrefereed pre-publication archive servers (e.g., arXiv.org). Submissions that do not comply with the foregoing instructions will be desk rejected without being reviewed.
All accepted contributions will appear in the ACM Digital Library, providing a lasting archived record of the workshop proceedings. At least one author of each accepted paper must register and present the paper in person at A-TEST 2024 in order for the paper to be published in the proceedings.
AUTHORS TAKE NOTE: The official publication date is the date the proceedings are made available in the ACM Digital Library. This date may be up to two weeks prior to the first day of your conference. The official publication date affects the deadline for any patent filings related to published work.
All questions about submissions should be emailed to the workshop organisers at atest2024@easychair.org.