ISSTA 2022
Mon 18 - Fri 22 July 2022 Online

Artificial Intelligence (AI) has achieved substantial success in enhancing various software testing and program analysis techniques and applications, including but not limited to static analysis, fuzz testing, GUI testing, vulnerability detection, code similarity analysis, software debloating, and patching. We often see a synergistic effect: AI models, by learning from past experience to make decisions, can notably boost conventional program analysis and software testing tasks. Hence, applying advanced machine learning techniques to suitable software engineering tasks is a promising direction.

Furthermore, recent years have also witnessed a substantial adoption of AI models in safety- and security-critical applications such as medical image processing, autonomous driving, aircraft control systems, machine translation, and surveillance cameras. Thus, it is also crucial to apply software testing and program analysis techniques to ensure the robustness, fairness, explainability, and reliability of AI models, especially when AI is deployed in safety- and security-critical applications.

The AISTA workshop aims to create an opportunity for researchers to discuss their research, share recent ideas, and present new perspectives at the intersection of AI and Software Testing/Analysis, i.e., AI for Software Testing/Analysis and Software Testing/Analysis for AI. The workshop will consist of invited talks and presentations based on research paper submissions.


Mon 18 Jul

Displayed time zone: Seoul

15:00 - 18:00
Main Session (AISTA at AISTA)
Chair(s): Lei Ma University of Alberta, Shuai Wang Hong Kong University of Science and Technology, Xiaofei Xie Singapore Management University, Singapore
Neural Network Fairness: Verification and Repair
Jun Sun Singapore Management University
A tool support for bug triage automation
Oskar Picus, Camelia Serban Department of Computer Science, Babes-Bolyai University
TEESlice: Slicing DNN Models for Secure and Efficient Deployment
Ziqi Zhang Peking University, Yifeng Cai, Yao Guo Peking University, Bingyan Liu Peking University, Ding Li Peking University, Lucien K. L. Ng, Xiangqun Chen Peking University
Test Case Prioritization based on Neural Networks Classification
Cristina-Maria Tiutin, Andreea Vescan Babes-Bolyai University

Call for Papers

CLARIFICATION: as a convention, authors only need to register a submission on EasyChair with the paper title by the registration deadline, which is Sun 1 May 2022 (AoE time). It is not necessary to submit the abstract or the full paper by the registration deadline. Please contact the co-chairs if you have any questions about the submission.

CLARIFICATION: we are planning to postpone the registration/submission deadline by a few days (not confirmed yet). It is likely that we will give a couple of days of extension. Thank you for your interest in submitting to AISTA, and please register/submit soon.

CLARIFICATION: AISTA is still open for submission until May 12th!

AISTA invites the following submissions:

  • Full papers (up to 8 pages; including references): Original, unpublished results that are related to the topics of AISTA.

  • New idea papers (up to 4 pages; including references): New ideas supported by initial evidence.

  • Fast abstract papers (up to 2 pages; including references)

We plan to organize AISTA as a one-day workshop, consisting of a half-day keynote session and a half-day presentation of accepted papers. Note that all papers, including full papers, new idea papers, and fast abstract papers, will receive a full presentation and discussion slot.

AISTA focuses on the intersection of AI and Software Testing/Analysis. We welcome submissions on (but not limited to) the following topics:

  • AI for program static and dynamic analysis
  • AI for software testing
  • AI for other software engineering topics like optimization or code comprehension
  • Applying software testing techniques to AI models
  • Applying program static or dynamic analysis techniques to AI models
  • Testing/analyzing AI infrastructure
  • Empirical studies of related topics


Papers must conform to the ACM conference format and be submitted in PDF. Papers will be reviewed in a double-blind process. Papers can be submitted via EasyChair. In particular, the following LaTeX code should be placed at the start of the LaTeX document.
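The page does not reproduce the snippet itself; as a sketch only, ACM conference submissions in double-blind review commonly start from the `acmart` class with the `review` and `anonymous` options enabled (the workshop's exact prescribed preamble may differ, so check the official instructions):

```latex
% Typical preamble for an ACM-format, double-blind submission.
% NOTE: this is an illustrative assumption, not the workshop's
% verbatim snippet; the CFP's exact code may differ.
\documentclass[sigconf,review,anonymous]{acmart}

\begin{document}
\title{Paper Title}
\maketitle
% paper body ...
\end{document}
```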