About ASTA
ASTA – Agentic AI in Software Testing and Automation is a workshop dedicated to exploring the use of agentic artificial intelligence for software testing, verification, and validation. Agentic AI systems combine large language models with capabilities such as planning, memory, and tool usage, enabling autonomous and goal-driven behavior with limited human oversight. When applied to software testing, these systems act as adaptive testers that can explore systems under test, generate and execute test scenarios, and reason about outcomes in dynamic and complex environments.
The workshop aims to bring together researchers and practitioners interested in advancing the state of the art in automated testing through agentic approaches. ASTA provides a focused forum to discuss novel architectures, methodologies, tools, and empirical evidence that demonstrate how autonomous agents can complement or outperform traditional testing techniques across a wide range of domains, including web, mobile, embedded, and cyber-physical systems.
ASTA emphasizes both foundational and applied perspectives. Topics range from the design of agentic testing architectures and intelligent test oracles to empirical studies, industrial experiences, and benchmarks assessing effectiveness, reproducibility, and cost-efficiency. By encouraging contributions at different stages of maturity—from early vision papers to fully developed research and tools—the workshop aims to foster community building and identify open challenges and future research directions in agent-driven software testing.
The workshop is organized as an interactive event featuring paper presentations, tool demonstrations, and open discussions, with the goal of stimulating collaboration between academia and industry and shaping a shared research agenda for agentic AI in software testing and automation.
The workshop will be held in hybrid format, with the option to attend remotely.
Call for Papers
Authors are invited to submit papers to ASTA – Agentic AI in Software Testing and Automation, a workshop focused on autonomous, goal-driven AI agents and agentic systems applied to software testing, verification, and validation. The workshop aims to foster discussion on novel architectures, tools, empirical evidence, and open challenges related to agentic AI for testing.
Paper Types
ASTA welcomes the following types of contributions:
- Regular research papers (max. 8 pages, references included) presenting original research results, novel techniques, architectures, empirical studies, or industrial experiences related to agentic AI for software testing and automation.
- Short papers (max. 4 pages, references included) describing early-stage ideas, new research directions, preliminary results, or challenges intended to stimulate discussion within the community.
- Extended abstracts (max. 2 pages) for tool demonstrations, vision papers, and hands-on sessions, showcasing agentic testing tools, workflows, benchmarks, or forward-looking perspectives.
Topics of Interest
Relevant topics include, but are not limited to:
- Agentic and autonomous AI for software testing
- LLM-based agents as intelligent testers
- Agentic architectures for test generation, execution, and maintenance
- Autonomous test agents for web, mobile, embedded, and cyber-physical systems
- Functional and non-functional testing with agentic systems (e.g., usability, security)
- Test oracle construction using agentic AI
- Tool orchestration by autonomous testing agents (e.g., fuzzers, debuggers, load testers)
- Empirical evaluations, benchmarks, and case studies of agent-driven testing
- Evaluation metrics, controllability, reproducibility, and cost-effectiveness of agentic testing approaches
Paper Submission Guidelines
All submissions must be written in English and submitted in PDF format.
ASTA will employ a single-blind review process; each paper will be reviewed by two or three program committee members.
Papers must adhere to the IEEE publication format. Accepted papers will be published as part of the ICST Workshop Proceedings.
Submissions will be handled via EasyChair.
Selection criteria include relevance to agentic AI for software testing, technical soundness, originality, and potential to stimulate discussion. At least one author of each accepted paper must register for the workshop.