ICST 2025
Mon 31 March - Fri 4 April 2025 Naples, Italy

The 1st International Workshop on Secure, Accountable, and Verifiable Machine Learning (SAFE-ML 2025), co-located with the 18th IEEE International Conference on Software Testing, Verification and Validation (ICST 2025), will be a physical event held in Naples, Italy (Mon 31 March - Fri 4 April 2025). SAFE-ML 2025 is scheduled for Tuesday, 1 April 2025.

The International Workshop on Secure, Accountable, and Verifiable Machine Learning (SAFE-ML) addresses the critical challenges at the intersection of Machine Learning (ML) and software testing. As ML systems are increasingly adopted across various sectors, concerns about privacy breaches, security vulnerabilities, and biased decision-making have grown.

This workshop focuses on developing innovative methods, tools, and techniques to comprehensively test and validate the security aspects of ML systems, ensuring their safe and reliable deployment. SAFE-ML seeks to foster discussion and drive the creation of solutions that streamline the testing of ML systems from multiple perspectives.

Program

This program is tentative and subject to change.

Tue 1 Apr

Displayed time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna

08:00 - 09:00
Registration (Social) at Building Hall

09:00 - 09:10
Day Opening (SAFE-ML)

09:10 - 10:30
Keynote (SAFE-ML) at Aula Magna (AM)
Chair(s): Carlo Mazzocca (Università di Salerno)

Prof. Mauro Conti, “Brave New Threat: The Rise of Covert and Side Channels”

10:30 - 11:00
Coffee Break (Social)

11:00 - 12:20
Security and Privacy in Federated Learning Systems (SAFE-ML) at Aula Magna (AM)
Chair(s): Carlo Mazzocca (Università di Salerno)

11:00 – Towards A Common Task Framework for Distributed Collaborative Machine Learning

Qianying Liao, Dimitri Van Landuyt, Davy Preuveneers and Wouter Joosen


11:15 – Federated Learning under Attack: Game-Theoretic Mitigation of Data Poisoning

Marco De Santis and Christian Esposito


11:40 – Privacy-Preserving in Federated Learning: A Comparison Between Differential Privacy and Homomorphic Encryption Across Different Scenarios

Alessio Catalfamo, Maria Fazio, Antonio Celesti and Massimo Villari


11:55 – Exploring and Mitigating Gradient Leakage Vulnerabilities in Federated Learning

Harshit Gupta, Ghena Barakat, Luca D’Agati, Francesco Longo, Giovanni Merlino and Antonio Puliafito


12:30 - 14:00
Lunch (Social) at Room A3

14:00 - 15:30
Robustness, Verification, and Security in AI Systems (SAFE-ML) at Aula Magna (AM)
Chair(s): Alessio Mora (Alma Mater Studiorum - Università di Bologna)

14:00 – Quantifying Correlations of Machine Learning Models

Yuanyuan Li, Neeraj Sarna and Yang Lin


14:25 – Structural Backdoor Attack on IoT Malware Detectors Via Graph Explainability

Yu-Cheng Chiu, Maina Bernard Mwangi, Shin-Ming Cheng and Hahn-Ming Lee


14:50 – Black-Box Multi-Robustness Testing for Neural Networks

Mara Downing and Tevfik Bultan


15:15 – Towards a Probabilistic Framework for Analyzing and Improving LLM-Enabled Software

Juan Manuel Baldonado, Flavia Bonomo-Braberman and Víctor Adrián Braberman


15:30 - 16:00
Coffee Break (Social)

16:00 - 16:15
Day Closing (SAFE-ML)

Call for Papers

Machine Learning (ML) models are becoming deeply integrated into our daily lives, with their use expected to expand even further in the coming years. However, as these models grow in importance, potential vulnerabilities — such as biased decision-making and privacy breaches — could result in serious unintended consequences.

The 1st International Workshop on Secure, Accountable, and Verifiable Machine Learning (SAFE-ML 2025) aims to bring together experts from industry and academia, with backgrounds in software testing and ML, to discuss and address these challenges. The focus will be on innovative methods and tools to ensure the correctness, robustness, security, and fairness of ML models, including in decentralized learning schemes. Topics of interest include, but are not limited to, the following (a short illustrative sketch of one topic appears after the list):

  • Privacy preservation of ML models;
  • Adversarial robustness in ML models;
  • Security of ML models against poisoning attacks;
  • Ensuring fairness and mitigating bias in ML models;
  • Unlearning algorithms in ML;
  • Unlearning algorithms in decentralized learning schemes, such as Federated Learning (FL), gossip learning and split learning;
  • Secure aggregation in FL;
  • Robustness of FL models against malicious clients or model inversion attacks;
  • Fault tolerance and resilience to client dropouts in FL;
  • Secure model updates in FL;
  • Proof of client participation in FL;
  • Explainability and interpretability of ML algorithms;
  • ML accountability.
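
To make one of these topics concrete, here is a minimal, illustrative Python sketch of differentially private federated averaging with per-client clipping, in the spirit of DP-FedAvg; the function name, parameters, and toy data are our own illustrative choices, not drawn from any workshop paper.

    import numpy as np

    def dp_federated_average(client_updates, clip_norm=1.0, noise_std=0.1, seed=None):
        """Average client model updates with per-client clipping and Gaussian noise.

        Clipping bounds each client's influence on the aggregate (which also
        blunts poisoning); the added noise yields a differentially private
        average, with the privacy level depending on noise_std and clip_norm.
        """
        rng = np.random.default_rng(seed)
        clipped = []
        for update in client_updates:
            norm = np.linalg.norm(update)
            scale = min(1.0, clip_norm / (norm + 1e-12))  # shrink only oversized updates
            clipped.append(update * scale)
        aggregate = np.mean(clipped, axis=0)
        noise = rng.normal(0.0, noise_std * clip_norm / len(client_updates),
                           size=aggregate.shape)
        return aggregate + noise

    # Example: three clients, one of which sends an outsized (possibly poisoned) update.
    updates = [np.array([0.1, -0.2]), np.array([0.05, 0.1]), np.array([50.0, 50.0])]
    print(dp_federated_average(updates, clip_norm=1.0, noise_std=0.05, seed=0))

Note that clipping alone does not remove a malicious client's vote; it only caps its magnitude, which is why the topics above also list secure aggregation and robustness against malicious clients as complementary concerns.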

Submission Format

Submissions must conform to the IEEE conference proceedings template, as specified in the IEEE Conference Proceedings Formatting Guidelines. All accepted papers will be published in the proceedings through the IEEE Xplore Digital Library.

Submissions may fall into the following categories:

  • Full Papers (up to 8 pages, excluding references): Comprehensive presentations of mature research findings or industrial applications;
  • Short Papers (up to 4 pages, excluding references): Explorations of emerging ideas or preliminary research results;
  • Position Papers (up to 2 pages, excluding references): Statements outlining positions or open challenges that stimulate discussion and debate.

Submission site: https://easychair.org/my/conference?conf=icst2025. Please be sure to select "The 1st International Workshop on Secure, Accountable, and Verifiable Machine Learning" as the track for your submission.

Workshop Format

This workshop is held as part of ICST 2025 and will be an in-person event held in Naples, Italy. For details see the main ICST website.

Presentations of accepted papers will have the following durations, depending on paper type:

  • Full Papers: 22 minutes (including Q&A);
  • Short Papers: 15 minutes (including Q&A);
  • Position Papers: 7 minutes (including Q&A).

SAFE-ML requires all presentations to be given in person.

Review Process

The review process will follow a single-blind format, meaning authors are not required to anonymize their submissions.

Important Dates

Paper Submission: 10 January 2025 (AoE)

Decision Notification: 6 February 2025

Camera-ready: 8 March 2025

Contacts

Any doubts or queries can be addressed to the General Co-Chairs by e-mail.

Accepted Papers

  • Structural Backdoor Attack on IoT Malware Detectors Via Graph Explainability. Yu-Cheng Chiu, Maina Bernard Mwangi, Shin-Ming Cheng and Hahn-Ming Lee. (Full Paper)
  • Quantifying Correlations of Machine Learning Models. Yuanyuan Li, Neeraj Sarna and Yang Lin. (Full Paper)
  • Black-Box Multi-Robustness Testing for Neural Networks. Mara Downing and Tevfik Bultan. (Full Paper)
  • Federated Learning under Attack: Game-Theoretic Mitigation of Data Poisoning. Marco De Santis and Christian Esposito. (Full Paper)
  • Privacy-Preserving in Federated Learning: A Comparison Between Differential Privacy and Homomorphic Encryption Across Different Scenarios. Alessio Catalfamo, Maria Fazio, Antonio Celesti and Massimo Villari. (Full Paper)
  • Towards a Probabilistic Framework for Analyzing and Improving LLM-Enabled Software. Juan Manuel Baldonado, Flavia Bonomo-Braberman and Víctor Adrián Braberman. (Short Paper)
  • Towards A Common Task Framework for Distributed Collaborative Machine Learning. Qianying Liao, Dimitri Van Landuyt, Davy Preuveneers and Wouter Joosen. (Short Paper)
  • Exploring and Mitigating Gradient Leakage Vulnerabilities in Federated Learning. Harshit Gupta, Ghena Barakat, Luca D’Agati, Francesco Longo, Giovanni Merlino and Antonio Puliafito. (Short Paper)

Brave New Threat: The Rise of Covert and Side Channels

Abstract

Information and Communication Technologies are deeply integrated into our lives and manage an increasing amount of our confidential data. We use these technologies in a variety of ways—sometimes even unconsciously—for our work, to interact with other people, or just for entertainment through games and music. Protecting the data these technologies handle involves more than just preventing adversaries from gaining physical or remote control of a device through traditional attacks, such as exploiting software or protocol vulnerabilities. It also includes addressing how adversaries might steal information through side and covert channels.

In this talk, we take a journey through representative research results we have published in the domain of side and covert channels, ranging from work published in IEEE TIFS (2016) to more recent results published at USENIX Security 2022, INFOCOM 2023, CCS 2023, DIMVA 2024, and WWW 2024, some of which were also demonstrated at Black Hat conferences. We discuss threats arising from contextual information and the extent to which it is feasible to infer very specific details. In particular, we discuss attacks such as the following (a toy illustration of the first appears after the list):

  • Inferring user actions on a smartphone by eavesdropping on its encrypted network traffic.
  • Identifying the presence of a specific user within a network through energy consumption analysis.
  • Inferring information (including sensitive details like passwords and PINs) using timing, acoustic, video, or battery status information.
  • Analyzing the way users play games and listen to music to extract valuable insights.
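
As a deliberately simplified illustration of the first attack above (a toy sketch, not the speaker's actual method), the Python snippet below trains a classifier to recognize user actions from packet-size traces; the action names, trace statistics, and synthetic data are all hypothetical.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # Toy data: each "trace" is a sequence of encrypted-packet sizes, labeled
    # with the user action that produced it. Real traffic is far noisier, but
    # size and timing patterns often survive encryption.
    rng = np.random.default_rng(0)

    def synth_trace(action, n=20):
        base = {"send_message": 300, "browse_feed": 900, "upload_photo": 1400}[action]
        return rng.normal(base, 120, n)  # packet sizes around an action-specific mean

    actions = ["send_message", "browse_feed", "upload_photo"]
    X = np.array([synth_trace(a) for a in actions for _ in range(100)])
    y = np.array([a for a in actions for _ in range(100)])

    # Summary features per trace: mean, std, and total of packet sizes.
    feats = np.c_[X.mean(axis=1), X.std(axis=1), X.sum(axis=1)]
    clf = RandomForestClassifier(random_state=0).fit(feats[::2], y[::2])
    print("held-out accuracy:", clf.score(feats[1::2], y[1::2]))

The point of the toy example is that the eavesdropper never decrypts anything: coarse statistics of the ciphertext stream alone can suffice to label user behavior.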

Speaker Bio

Mauro Conti is a Full Professor at the University of Padua, Italy. He is also affiliated with the University of Washington, Seattle, and serves as a Wallenberg WASP Guest Professor at Örebro University, Sweden. He obtained his Ph.D. from Sapienza University of Rome, Italy, in 2009. After his Ph.D., he was a Post-Doc Researcher at Vrije Universiteit Amsterdam, The Netherlands. In 2011, he joined the University of Padua as an Assistant Professor, where he became Associate Professor in 2015 and Full Professor in 2018.

Mauro has been a Visiting Researcher at GMU, UCLA, UCI, TU Darmstadt, UF, FIU, and an Affiliate Professor at TU Delft. He has received prestigious fellowships, including:

  • Marie Curie Fellowship (2012) from the European Commission.
  • DAAD Fellowship (2013) from the German Academic Exchange Service.
  • WASP Fellowship (2025) from the Knut and Alice Wallenberg Foundation.

His research is also supported by industry partners, including Cisco, Intel, and Huawei. His main research focus is Security and Privacy, where he has published over 600 papers in top-tier international journals and conferences.

Editorial & Conference Roles

  • Editor-in-Chief for IEEE Communications Surveys & Tutorials.
  • Former Editor-in-Chief for IEEE Transactions on Information Forensics and Security (2022-24).
  • Associate Editor for IEEE Transactions on Dependable and Secure Computing, IEEE Transactions on Information Forensics and Security, and IEEE Transactions on Network and Service Management.
  • Program Chair for TRUST 2015, ICISS 2016, WiSec 2017, ACNS 2020, CANS 2021, CSS 2021, WiMob 2023, and ESORICS 2023.
  • General Chair for SecureComm 2012, SACMAT 2013, NSS 2021, ACNS 2022, RAID 2024, NDSS 2026 and 2027.

Recognitions & Honors

  • IEEE Fellow
  • AAIA Fellow
  • Distinguished Member of the ACM
  • Fellow of the Young Academy of Europe
  • Knight of the Order of Merit of the Italian Republic (2022), awarded by the President of the Republic.