
The 1st International Workshop on Secure, Accountable, and Verifiable Machine Learning (SAFE-ML 2025), co-located with the 18th IEEE International Conference on Software Testing, Verification and Validation (ICST 2025), will be an in-person event held in Naples, Italy, from Monday 31 March to Friday 4 April 2025.

The International Workshop on Secure, Accountable, and Verifiable Machine Learning (SAFE-ML) addresses the critical challenges at the intersection of Machine Learning (ML) and software testing. As ML systems are increasingly adopted across various sectors, concerns about privacy breaches, security vulnerabilities, and biased decision-making have grown.

This workshop focuses on developing innovative methods, tools, and techniques to comprehensively test and validate the security aspects of ML systems, ensuring their safe and reliable deployment. SAFE-ML seeks to foster discussion and drive the creation of solutions that streamline the testing of ML systems from multiple perspectives.

Call for Papers

Machine Learning (ML) models are becoming deeply integrated into our daily lives, with their use expected to expand even further in the coming years. However, as these models grow in importance, potential vulnerabilities — such as biased decision-making and privacy breaches — could result in serious unintended consequences.

The 1st International Workshop on Secure, Accountable, and Verifiable Machine Learning (SAFE-ML 2025) aims to bring together experts from industry and academia, with backgrounds in software testing and ML, to discuss and address these challenges. The focus will be on innovative methods and tools to ensure the correctness, robustness, security, and fairness of ML models, both in centralized settings and in decentralized learning schemes. Topics of the workshop include, but are not limited to:

  • Privacy preservation of ML models;
  • Adversarial robustness in ML models;
  • Security of ML models against poisoning attacks;
  • Ensuring fairness and mitigating bias in ML models;
  • Unlearning algorithms in ML;
  • Unlearning algorithms in decentralized learning schemes, such as Federated Learning (FL), gossip learning and split learning;
  • Secure aggregation in FL;
  • Robustness of FL models against malicious clients or model inversion attacks;
  • Fault tolerance and resilience to client dropouts in FL;
  • Secure model updates in FL;
  • Proof of client participation in FL;
  • Explainability and interpretability of ML algorithms;
  • ML accountability.

Submission Format

Submissions must conform to the IEEE conference proceedings template, as specified in the IEEE Conference Proceedings Formatting Guidelines.

Submissions may fall into the following categories:

  • Full Papers (up to 8 pages): Comprehensive presentations of mature research findings or industrial applications;
  • Short Papers (up to 4 pages): Explorations of emerging ideas or preliminary research results;
  • Position Papers (up to 2 pages): Statements outlining positions or open challenges that stimulate discussion and debate.

Submission site: https://easychair.org/my/conference?conf=icst2025. Please be sure to select The 1st International Workshop on Secure, Accountable, and Verifiable Machine Learning as the track for your submission.

Workshop Format

This workshop is held as part of ICST 2025 and will take place in person in Naples, Italy. For details, see the main ICST website.

Presentations of accepted papers will have the following durations, depending on the paper type:

  • Full Papers: 22 minutes (including Q&A);
  • Short Papers: 15 minutes (including Q&A);
  • Position Papers: 7 minutes (including Q&A).

The workshop will also include a panel discussion. SAFE-ML requires all presentations to be given in person.

Review Process

The review process will follow a single-blind format, meaning authors are not required to anonymize their submissions.

Important Dates

  • Paper Submission: 3 January 2025 (AoE);
  • Decision Notification: 6 February 2025;
  • Camera-ready: 8 March 2025.

Contacts

Any questions or queries can be addressed to the General Co-Chairs at the following e-mail addresses: