IEEE AITest 2025
IEEE AITest 2025 is the seventh edition of the IEEE conference series focusing on the synergy of artificial intelligence (AI) and software testing. The conference provides an international forum for researchers and practitioners to exchange novel research results, articulate problems and challenges from practice, deepen understanding of the subject area with new theories, methodologies, techniques, and process models, and improve practice with new tools and resources. This year's conference will be held in Tucson, Arizona, USA, 21-24 July 2025, as part of the IEEE CISOSE 2025 congress.
Topics of Interest
Topics of interest include, but are not limited to:
- Methodologies, theories, techniques, and tools for testing, verification, and validation of AI
- Test oracles for testing AI
- Tools and resources for automated testing of AI
- Specific concerns of testing domain-specific AI
- AI techniques for software testing
- AI applications in software testing
- Testing of Large Language Models (LLMs)
- Data quality and validation for AI
- Human testers and AI-based testing
- Quality assurance for unstructured training data
- Quality evaluation and assurance for LLMs
- LLMs for software engineering and testing
- Responsible AI testing
- Techniques for testing deep neural networks, reinforcement learning, and graph learning
- Constraint programming for test case generation and test suite reduction
- Constraint scheduling and optimization for test case prioritization and test execution scheduling
- Crowdsourcing and swarm intelligence in software testing
- Genetic algorithms, search-based techniques, and heuristics to optimize testing
- Large-scale unstructured data quality certification
- AI and data management policies
- Impact of generative AI (GAI) on education
- Computer vision testing
- Intelligent chatbot testing
- Smart machine (robot/AV/UAV) testing
- Fairness, ethics, bias, and trustworthiness for LLM applications
Important Dates
Main paper
- April 1st, 2025: Submission deadline
- May 10th, 2025: Author notification
- June 1st, 2025: Camera-ready and author registration
Workshop paper
- May 15th, 2025: Submission deadline
- May 22nd, 2025: Author notification
- June 1st, 2025: Camera-ready and author registration
Submission
Submit original manuscripts (not published or submitted elsewhere) with the following page limits:
- Regular papers (8 pages, IEEE double-column format)
- Short papers (4 pages, IEEE double-column format)
- AI testing in practice (8 pages, IEEE double-column format)
- Tool demo track (6 pages, IEEE double-column format)
We welcome both regular research papers, which describe original and significant work or report on case studies and empirical research, and short papers, which describe late-breaking research results or work in progress with timely and innovative ideas. All paper types may add 2 extra pages subject to page charges. The AI Testing in Practice track provides a forum for networking and for exchanging ideas and innovative or experimental practices that directly impact the practice of software testing for AI. The tool demo track provides a forum to present and demonstrate innovative tools and/or new benchmarking datasets in the context of software testing for AI. All papers must be written in English and must include a title, an abstract, and a list of 4-6 keywords. All papers must be prepared in the IEEE double-column proceedings format. Authors must submit their papers via the submission link by April 1, 2025, 23:59 AoE; for more information, please visit the conference website. Any use of AI-generated content in an article (including but not limited to text, figures, images, and code) must be disclosed in the acknowledgments section of the submitted article.
Conference Proceedings & Special Section of SCI journals
All accepted papers will be published by the IEEE Computer Society Press (EI-indexed) and included in the IEEE Digital Library. Authors of the best papers will be invited to submit an extended version (with at least 30% new content) to selected special issues (TBA).