IEEE AITest 2025 is the seventh edition of the IEEE conference series focusing on the synergy of artificial intelligence (AI) and software testing. The conference provides an international forum for researchers and practitioners to exchange novel research results, articulate problems and challenges from practice, deepen our understanding of the subject area with new theories, methodologies, techniques, process models, impacts, etc., and improve practice with new tools and resources. This year’s conference will be held in Tucson, Arizona, USA, on 21-24 July 2025, as part of the IEEE CISOSE 2025 congress.

Topics of Interest

Topics for IEEE AITest 2025 encompass key methodologies for verifying and validating AI systems, along with the innovative use of AI for software testing. They also address the challenges and emerging areas in large language models, data quality, policy, domain-specific AI testing, and the ethical implications of responsible AI development.

For specific topics, please refer to the Call for Papers page.

Contact

For more information and any questions, please contact the AITest 2025 PC chairs:

Topics of Interest

Topics of interest include, but are not limited to:

1. AI System Testing

  • Methodologies, theories, techniques, and tools for testing, verification, and validation of AI
  • Test oracles for testing AI
  • Tools and resources for automated testing of AI
  • Techniques for testing deep learning, reinforcement learning, and graph learning

2. AI for Software Testing

  • AI techniques for software testing
  • AI applications in software testing
  • Human testers and AI-based testing
  • Crowdsourcing and swarm intelligence in software testing
  • Genetic algorithms, search-based techniques, and heuristics to optimize testing
  • Constraint programming for test case generation and test suite reduction
  • Constraint scheduling and optimization for test case prioritization and test execution scheduling

3. Large Language Models (LLMs)

  • Testing of Large Language Models (LLMs)
  • Quality evaluation and assurance for LLMs
  • LLMs for software engineering and testing
  • Fairness, ethics, bias, and trustworthiness for LLM applications

4. Data Quality and Policy

  • Data quality and validation for AI
  • Quality assurance for unstructured training data
  • Large-scale unstructured data quality certification
  • AI and data management policies

5. Domain-Specific Testing

  • Specific concerns in testing domain-specific AI
  • Computer Vision Testing
  • Intelligent Chatbot Testing
  • Smart Machine (Robot/AV/UAV) Testing
  • Impact of generative AI (GenAI) on education
  • Responsible AI testing

Important Dates

Main paper

  • April 1, 2025 – Submission deadline
  • May 10, 2025 – Author notification
  • June 1, 2025 – Camera-ready and author registration

Workshop paper

  • May 15, 2025 – Submission deadline
  • May 22, 2025 – Author notification
  • June 1, 2025 – Camera-ready and author registration

Submission

Page Limits

Submit original manuscripts (not published or submitted elsewhere) with the following page limits:

  • Regular papers: 8 pages
  • Short papers: 4 pages
  • AI Testing in Practice track: 8 pages
  • Tool demo track: 6 pages

Content

We welcome submissions of regular research papers, which describe original and significant work or report on case studies and empirical research, and of short papers, which describe late-breaking research results or work in progress with timely and innovative ideas. The AI Testing in Practice track provides a forum for networking, exchanging ideas, and sharing innovative or experimental practices that address software engineering research with a direct impact on the practice of software testing for AI. The tool demo track provides a forum to present and demonstrate innovative tools and/or new benchmarking datasets in the context of software testing for AI.

Formats and Submission Instructions

  • All papers must be written in English. Papers must include a title, an abstract, and a list of 4-6 keywords.
  • All types of papers may include up to 2 extra pages, subject to page charges.
  • All papers must be prepared in the IEEE double-column proceedings format.
  • Authors must submit their manuscripts via EasyChair (IEEE AITest 2025) by April 1, 2025, 23:59 AoE at the latest.

For more information, please visit the conference website. The use of content generated by AI in an article (including but not limited to text, figures, images, and code) shall be disclosed in the acknowledgments section of the submitted article.


Conference Proceedings & Special Issues of SCI Journals

  • All accepted papers will be published by the IEEE Computer Society Press (EI-indexed) and included in the IEEE Digital Library.
  • Authors of the best papers will be invited to submit extended versions (with at least 30% new content) to selected special issues (TBA).

Aims and Scope

The 2025 IEEE International Conference on Artificial Intelligence Testing (AITest) invites proposals for workshops and tutorials. Workshops are intended to bring together communities of interest, whether established communities or communities interested in discussing and exploring a new or emerging issue. Their format can range from formal, perhaps centering on the presentation of refereed papers, to informal, perhaps centering on an extended roundtable discussion among the selected participants.

Tutorials provide an opportunity to offer in-depth education on a topic or solution relevant to research or practice in AI Testing. They should address a single topic in detail. They are not intended to be venues for commercial product training. Workshops and tutorials can span a half-day or a full day.

Proposal Format

Successful proposals should include:

  • The title of the workshop or tutorial
  • A description of the workshop/tutorial, including whether it is planned as a half-day or full-day event
  • A topical outline of the workshop/tutorial
  • Anticipated audience for the workshop/tutorial
  • Expected outcomes of the workshop/tutorial
  • Short biographies and contact information of the proposers

Submission Guidelines

All submissions must be in English, in PDF format, in the current IEEE double-column proceedings format, and up to 2 pages. Suitable LaTeX, Word, and Overleaf templates are available from the IEEE Website.

All proposals should be submitted by email to:

(Subject: “IEEE AITest 2025 Workshop/Tutorial proposal”)

Organization and Logistics

All workshops and tutorials are planned as in-person events. However, hybrid events (with at least one organizer present in person) can also be accommodated. Please indicate your plan and preference in your proposal.

Important Dates

All dates are Anywhere on Earth (AoE)

  • April 1, 2025 – Deadline for submission
  • April 7, 2025 – Notification of acceptance
  • May 15, 2025 – Workshop paper submission deadline
  • May 22, 2025 – Workshop paper author’s notification
  • July 23-24, 2025 – Workshop or Tutorial date (tentative)

Contact

For more information and any questions, please contact the AITest 2025 PC chairs:

Aims and Scope

Panels at the 2025 IEEE International Conference on Artificial Intelligence Testing (AITest) are intended to draw together communities of interest, including those working on testing AI systems and on using AI techniques for software testing, as well as those engaged with emerging issues or techniques of interest to the community at large.

Panels typically last about 60–90 minutes and include an extended round-table discussion among the selected participants and the audience. Proposers are welcome to suggest panel formats that will engage and inform the audience; if a proposal is accepted, AITest 2025 will work to provide appropriate facilities and setups to support the proposed format. Panels can consist of short position statements followed by discussion, or they can be structured as conversations that engage audience members from the outset. While topics are open, preference will be given to panels that align with the topics of interest of AITest 2025.

Proposal Format

Submissions should include:

  • A statement of goals or learning objectives for the panel
  • An outline for the panel topics
  • The expected audience and expected number of attendees
  • A tentative list of panelists and their bios. Please indicate whether the panelists have already been contacted about the panel.
  • A discussion of any engagement techniques that will require specific physical or technical arrangements from the local hosts (e.g., some speakers participating online). Please note that at least one panelist and/or organizer must be physically on site.
  • Contact and biographical information for the organizers. (Organizers may also serve as panelists, but this is not a requirement.) Please note the organizers’ prior experience organizing similarly themed panels or workshops.

Submission Guidelines

All submissions must be in English, in PDF format, in the current IEEE double-column proceedings format, and up to 2 pages. Suitable LaTeX, Word, and Overleaf templates are available from the IEEE Website.

All proposals should be submitted by email to:

(Subject: “IEEE AITest 2025 Panel proposal”)

Important Dates

All dates are Anywhere on Earth (AoE)

  • April 1, 2025 – Deadline for submission
  • April 7, 2025 – Notification of acceptance
  • July 23-24, 2025 – Panel date (tentative)

Contact

For more information and any questions, please contact the AITest 2025 PC chairs:

General Chairs

Antonia Bertolino

  • National Research Council, Italy

Jerry Gao

  • San Jose State University

Hong Zhu

  • Oxford Brookes University

PC Chairs

Monowar Bhuyan

  • Umeå University

Haihua Chen

  • University of North Texas

Website Chair

Yuhan Zhou

  • University of North Texas

TPC Members

Rob Alexander

  • University of York

Muhammad Atif

  • University of Florence

Christian Berger

  • University of Gothenburg

Adil Bin Bhutto

  • Umeå University

Michele Carminati

  • Politecnico di Milano

W.K. Chan

  • City University of Hong Kong

Jaganmohan Chandrasekaran

  • Virginia Polytechnic Institute and State University (Virginia Tech)

T.Y. Chen

  • Swinburne University of Technology

Zhenbang Chen

  • National University of Defense Technology, Changsha, China

S.C. Cheung

  • The Hong Kong University of Science and Technology

Stanislav Chren

  • Aalto University

Claudio De La Riva

  • Universidad de Oviedo

Anurag Dwarakanath

  • Accenture Technology Labs

Chunrong Fang

  • Software Institute of Nanjing University

Yunhe Feng

  • University of North Texas

Gordon Fraser

  • University of Passau

Lingzi Hong

  • University of North Texas

Beilei Jiang

  • University of North Texas

Bo Jiang

  • Beihang University

Foutse Khomh

  • DGIGL, École Polytechnique de Montréal

Yu Lei

  • University of Texas at Arlington

J. Jenny Li

  • Kean University

Dongmei Liu

  • Nanjing University of Science and Technology

Francesca Lonetti

  • CNR-ISTI

Dusica Marijan

  • Simula

Richard Millham

  • Durban University of Technology

Deepanjan Mitra

  • University of Calcutta

Andrea Polini

  • University of Camerino

Marc Roper

  • University of Strathclyde

Chang-Ai Sun

  • University of Science and Technology Beijing

Tatsuhiro Tsuchiya

  • Osaka University

Javier Tuya

  • Universidad de Oviedo

Mark Utting

  • The University of Queensland

Zhongyi Wang

  • Central China Normal University

Franz Wotawa

  • Technische Universität Graz

Obaidullah Zaland

  • Umeå University

Huanhuan Zhao

  • University of Tennessee at Knoxville

Zhiquan Zhou

  • NIO Inc.

Mohammad Zulkernine

  • Queen’s University, Canada

Hosted by Systems and Industrial Engineering at the University of Arizona, Tucson.

The conference will take place at the University of Arizona Library.

Location

1510 E University Blvd, 85721

Tucson, Arizona, United States

Questions? Use the CISOSE IEEE AITest contact form.