ICST 2026
Mon 18 - Fri 22 May 2026 Daejeon, South Korea

The 2nd International Workshop on Secure, Accountable, and Verifiable Machine Learning (SAFE-ML 2026), co-located with the 19th IEEE International Conference on Software Testing, Verification and Validation (ICST 2026), will be a physical event held in Daejeon, South Korea (Mon 18 - Fri 22 May 2026). SAFE-ML 2026 is scheduled for 22 May 2026.

The International Workshop on Secure, Accountable, and Verifiable Machine Learning (SAFE-ML) addresses the critical challenges at the intersection of Machine Learning (ML) and software testing. As ML systems are increasingly adopted across various sectors, concerns about privacy breaches, security vulnerabilities, and biased decision-making have grown.

This workshop focuses on developing innovative methods, tools, and techniques to comprehensively test and validate the security aspects of ML systems, ensuring their safe and reliable deployment. SAFE-ML seeks to foster discussion and drive the creation of solutions that streamline the testing of ML systems from multiple perspectives.

Contacts

Any doubts or queries can be addressed to the General Co-Chairs at the following e-mail addresses:

Call for Papers

Machine Learning (ML) models are becoming deeply integrated into our daily lives, with their use expected to expand even further in the coming years. However, as these models grow in importance, potential vulnerabilities — such as biased decision-making and privacy breaches — could result in serious unintended consequences.

The 2nd International Workshop on Secure, Accountable, and Verifiable Machine Learning (SAFE-ML 2026) aims to bring together experts from industry and academia, with backgrounds in software testing and ML, to discuss and address these challenges. The focus will be on innovative methods and tools to ensure the correctness, robustness, security, and fairness of ML models and of decentralized learning schemes. Topics of interest include, but are not limited to:

  • Privacy preservation of ML models;
  • Adversarial robustness in ML models;
  • Security of ML models against poisoning attacks;
  • Ensuring fairness and mitigating bias in ML models;
  • Unlearning algorithms in ML;
  • Security, robustness, and privacy for Large Language Models (LLMs);
  • Unlearning algorithms in decentralized learning schemes, such as Federated Learning (FL), gossip learning and split learning;
  • Secure aggregation in FL;
  • Robustness of FL models against malicious clients or model inversion attacks;
  • Secure model updates in FL;
  • Explainability and interpretability of ML algorithms;
  • ML accountability.

Submission Format

Submissions must conform to the IEEE conference proceedings template, as specified in the IEEE Conference Proceedings Formatting Guidelines. All accepted papers will be published in the workshop proceedings through the IEEE Digital Library.

Submissions may fall into the following categories:

  • Full Papers (up to 8 pages, excluding references): Comprehensive presentations of mature research findings or industrial applications;
  • Short Papers (up to 4 pages, excluding references): Explorations of emerging ideas or preliminary research results;
  • Position Papers (up to 2 pages, excluding references): Statements outlining positions or open challenges that stimulate discussion and debate.

Submission Site

Submission site: https://easychair.org/conferences/?conf=safeml2026

Important Dates

Workshop Submission Due: 6 March 2026

Workshop Notification: 27 March 2026

Workshop Camera-ready Due: 10 April 2026

Workshop: 22 May 2026
