DeepTest 2021
Tue 1 Jun 2021
co-located with ICSE 2021

After running as one of the most successful workshops at ICSE 2020, the International Workshop on Testing for Deep Learning and Deep Learning for Testing (DeepTest) returns once more as a co-located workshop at the ACM/IEEE International Conference on Software Engineering (ICSE) in 2021!

DeepTest is a high-quality workshop for research at the intersection of Machine Learning (ML) and Software Engineering (SE). ML is widely adopted in modern software systems, including safety-critical domains such as autonomous cars, medical diagnosis, and aircraft collision avoidance systems. It is therefore crucial to rigorously test such applications to ensure high dependability. However, standard notions of software quality and reliability become inadequate when considering ML systems, due to their non-deterministic nature and the lack of a transparent understanding of the models' semantics. ML is also expected to revolutionize software development. Indeed, ML is being applied to devise novel program analysis and software testing techniques for malware detection, fuzz testing, bug-finding, and type-checking.

The workshop will combine academia and industry in a quest for well-founded practical solutions. The aim is to bring together an international group of researchers and practitioners with both ML and SE backgrounds to discuss their research, share datasets, and generally help the field to build momentum. The workshop will consist of invited talks, presentations based on research paper submissions, and one or more panel discussions, where all participants are invited to share their insights and ideas.

This program is tentative and subject to change.


Tue 1 Jun
Times are displayed in time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna

10:00 - 12:00
Session 1 (DeepTest Room)
Chair(s): Gunel Jahangirova (USI Lugano, Switzerland)
A Review and Refinement of Surprise Adequacy
Michael Weiss (Università della Svizzera Italiana, USI), Rwiddhi Chakraborty (USI Lugano, Switzerland), Paolo Tonella (USI Lugano, Switzerland)
Deep Learning-Based Prediction of Test Input Validity for RESTful APIs
Agatino Giuliano Mirabella (Universidad de Sevilla), Alberto Martin-Lopez (Universidad de Sevilla), Sergio Segura (Universidad de Sevilla), Luis Valencia-Cabrera (Universidad de Sevilla), Antonio Ruiz-Cortés (University of Seville)
Open Discussion & Q/A
13:00 - 14:00
Session 2 (DeepTest Room)
Chair(s): Vincenzo Riccio (USI Lugano, Switzerland)
Machine Learning Model Drift Detection Via Weak Data Slices
Orna Raz (IBM Research), Samuel Ackerman (IBM Corporation, Israel), Parijat Dube (IBM, USA), Eitan Farchi (IBM Haifa Research Lab), Marcel Zalmanovici
TF-DM: Tool for Studying ML Model Resilience to Data Faults
Niranjhana Narayanan (The University of British Columbia), Karthik Pattabiraman (University of British Columbia)
Open Discussion & Q/A
14:30 - 16:00
Session 3 (DeepTest Room)
Chair(s): Onn Shehory (Bar Ilan University)

Call for Papers

DeepTest is an interdisciplinary workshop targeting research at the intersection of SE and ML. We welcome submissions that investigate:

  • how to ensure the quality of ML-based applications, both at a model level and at a system level
  • the use of ML to support software engineering tasks, particularly software testing

Relevant topics include, but are not limited to:

  • Quality implications of ML algorithms on large-scale software systems
  • Application of classical statistics to ML systems quality
  • Training and payload data quality
  • Correctness of data abstraction, data trust
  • High-quality benchmarks for evaluating ML approaches

Testing and Verification

  • Test data synthesis for testing ML systems
  • White-box and black-box testing strategies
  • ML models for testing programs
  • Adversarial machine learning and adversary-based learning
  • Test coverage
  • Vulnerability, sensitivity, and attacks against ML
  • Metamorphic testing as software quality assurance
  • New abstraction techniques for verification of ML systems
  • ML techniques for software verification
  • DevOps for ML

Fault Localization, Debugging, and Repairing

  • Quality metrics for ML systems, e.g., correctness, accuracy, fairness, robustness, explainability
  • Sensitivity to data distribution diversity and distribution drift
  • Failure explanation and automated debugging techniques
  • Runtime monitoring
  • Fault localization and anomaly detection
  • Model repairing
  • The effect of labeling costs on solution quality (semi-supervised learning)
  • ML for fault prediction, localization, and repair
  • ML to aid program comprehension, program transformation, and program generation

We accept two types of submissions:

  • Full research papers: up to 8 pages, describing original and unpublished results related to the workshop topics.
  • Short papers: up to 4 pages, describing preliminary work, new insights into previous work, or demonstrations of testing-related tools and prototypes.

All submissions must conform to the ICSE 2021 formatting instructions, must be in PDF, and must respect the page limit, which is strict. Specifically, submissions must follow the IEEE Conference Proceedings Formatting Guidelines (title in 24pt font and full text in 10pt type; LaTeX users must use \documentclass[10pt,conference]{IEEEtran} without the compsoc or compsocconf options).

DeepTest 2021 will employ a double-blind review process, so no submission may reveal its authors' identities, and the authors must make every effort to honor this process. In particular, the authors' names must be omitted from the submission, and references to their prior work should be in the third person.

If you have any questions or wonder whether your submission is in scope, please do not hesitate to contact the organizers.

Special Issue

Authors of selected papers accepted at DeepTest 2021 will be invited to submit revised, extended versions of their manuscripts for a special issue of Empirical Software Engineering (EMSE), published by Springer. We will post additional details about this call in the future.

Questions? Use the DeepTest contact form.