ASTA 2026: Call for Papers
Authors are invited to submit papers to ASTA – Agentic AI in Software Testing and Automation, a workshop focused on autonomous, goal-driven AI agents and agentic systems applied to software testing, verification, and validation. The workshop aims to foster discussion on novel architectures, tools, empirical evidence, and open challenges related to agentic AI for testing.
The workshop will be held in hybrid format, with the option to attend remotely.
Paper Types
ASTA welcomes the following types of contributions:
- Regular research papers (max. 8 pages, references included) presenting original research results, novel techniques, architectures, empirical studies, or industrial experiences related to agentic AI for software testing and automation.
- Short papers (max. 4 pages, references included) describing early-stage ideas, new research directions, preliminary results, or challenges intended to stimulate discussion within the community.
- Extended abstracts (max. 2 pages) for tool demonstrations, vision papers, and hands-on sessions, showcasing agentic testing tools, workflows, benchmarks, or forward-looking perspectives.
Topics of Interest
Relevant topics include, but are not limited to:
- Agentic and autonomous AI for software testing
- LLM-based agents as intelligent testers
- Agentic architectures for test generation, execution, and maintenance
- Autonomous test agents for web, mobile, embedded, and cyber-physical systems
- Functional and non-functional testing with agentic systems (e.g., usability, security)
- Test oracle construction using agentic AI
- Tool orchestration by autonomous testing agents (e.g., fuzzers, debuggers, load testers)
- Empirical evaluations, benchmarks, and case studies of agent-driven testing
- Evaluation metrics, controllability, reproducibility, and cost-effectiveness of agentic testing approaches
Paper Submission Guidelines
All submissions must be written in English and submitted in PDF format.
ASTA will employ a single-blind review process, and each paper will be reviewed by two or three program committee members.
Papers must adhere to the IEEE conference publication format. Accepted papers will be published as part of the ICST Workshop Proceedings.
Submissions will be handled via EasyChair.
Selection criteria include relevance to agentic AI for software testing, technical soundness, originality, and potential to stimulate discussion. At least one author of each accepted paper must register for the workshop.