Fri 2 May 2025, 12:15 - 12:30 at 215 · SE for AI with Quality 1 · Chair(s): Chris Poskitt

In Federated Learning (FL), clients train models on local data and send updates to a central server, which aggregates them into a global model using a fusion algorithm. This collaborative yet privacy-preserving training comes at a cost: FL developers face significant challenges in attributing global model predictions to specific clients. Localizing responsible clients is a crucial step towards (a) excluding clients primarily responsible for incorrect predictions and (b) encouraging clients who contributed high-quality models to continue participating. Existing ML explainability approaches are inherently inapplicable because they are designed for single-model, centralized training.

We introduce TraceFL, a fine-grained neuron provenance capturing mechanism that identifies the clients responsible for a global model's prediction by tracking the flow of information from individual clients to the global model. Since inference on different inputs activates different sets of neurons in the global model, TraceFL dynamically quantifies the significance of the global model's neurons for a given prediction. It then selects a slice of the most crucial neurons in the global model and maps them to the corresponding neurons in every participating client's model to determine each client's contribution, ultimately localizing the responsible client. We evaluate TraceFL on six datasets (including two real-world medical imaging datasets) and four neural networks, including advanced models such as GPT. TraceFL achieves 99% accuracy in localizing the responsible client in FL tasks spanning both image and text classification. While state-of-the-art ML debugging approaches are mostly domain-specific (e.g., image classification only), TraceFL is the first technique to enable highly accurate automated reasoning across a wide range of FL applications.
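The pipeline described above (score neuron significance per prediction, keep only a crucial slice, map that slice back to each client's local weights) can be sketched in a few lines. This is a minimal illustrative sketch, not the authors' implementation: the function names, the activation-times-gradient significance score, and the inverse-distance similarity heuristic are all assumptions made for the example.

```python
import numpy as np

def client_contributions(acts, grads, client_weights, global_weights, top_k=2):
    """Hypothetical TraceFL-style provenance sketch.

    acts, grads      : per-neuron activations/gradients of the global model
                       for one prediction, shape (n_neurons,)
    client_weights   : dict client_id -> weight matrix, shape (n_neurons, d)
    global_weights   : fused global weight matrix, shape (n_neurons, d)
    Returns a dict of normalized contribution scores per client.
    """
    # 1. Quantify each neuron's significance for this prediction
    #    (activation x gradient is one common attribution heuristic).
    significance = np.abs(acts * grads)
    # 2. Select the slice of most crucial neurons.
    crucial = np.argsort(significance)[::-1][:top_k]
    # 3. Map crucial neurons back to each client: clients whose local
    #    weights sit closest to the fused global weights on those neurons
    #    receive more credit (inverse-distance weighting, an assumption).
    scores = {}
    for cid, w in client_weights.items():
        diff = np.linalg.norm(w[crucial] - global_weights[crucial], axis=1)
        scores[cid] = float(np.sum(significance[crucial] / (1e-8 + diff)))
    total = sum(scores.values())
    return {cid: s / total for cid, s in scores.items()}
```

The responsible client is then simply the argmax of the returned scores; in the real system the same idea is applied layer by layer through the network rather than to a single flat weight matrix.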

Fri 2 May

Displayed time zone: Eastern Time (US & Canada)

11:00 - 12:30
SE for AI with Quality 1 (Research Track) at 215
Chair(s): Chris Poskitt Singapore Management University
11:00
15m
Talk
A Tale of Two DL Cities: When Library Tests Meet Compiler [SE for AI]
Research Track
Qingchao Shen Tianjin University, Yongqiang Tian, Haoyang Ma Hong Kong University of Science and Technology, Junjie Chen Tianjin University, Lili Huang College of Intelligence and Computing, Tianjin University, Ruifeng Fu Tianjin University, Shing-Chi Cheung Hong Kong University of Science and Technology, Zan Wang Tianjin University
11:15
15m
Talk
Iterative Generation of Adversarial Example for Deep Code Models [SE for AI] [Award Winner]
Research Track
Li Huang, Weifeng Sun, Meng Yan Chongqing University
11:30
15m
Talk
On the Mistaken Assumption of Interchangeable Deep Reinforcement Learning Implementations [SE for AI] [Artifact-Functional] [Artifact-Available] [Artifact-Reusable]
Research Track
Rajdeep Singh Hundal National University of Singapore, Yan Xiao Sun Yat-sen University, Xiaochun Cao Sun Yat-Sen University, Jin Song Dong National University of Singapore, Manuel Rigger National University of Singapore
Pre-print Media Attached File Attached
11:45
15m
Talk
µPRL: a Mutation Testing Pipeline for Deep Reinforcement Learning based on Real Faults [SE for AI] [Artifact-Functional] [Artifact-Available] [Artifact-Reusable]
Research Track
Deepak-George Thomas Tulane University, Matteo Biagiola Università della Svizzera italiana, Nargiz Humbatova Università della Svizzera italiana, Mohammad Wardat Oakland University, USA, Gunel Jahangirova King's College London, Hridesh Rajan Tulane University, Paolo Tonella USI Lugano
Pre-print
12:00
15m
Talk
Testing and Understanding Deviation Behaviors in FHE-hardened Machine Learning Models [SE for AI]
Research Track
Yiteng Peng Hong Kong University of Science and Technology, Daoyuan Wu Hong Kong University of Science and Technology, Zhibo Liu Hong Kong University of Science and Technology, Dongwei Xiao Hong Kong University of Science and Technology, Zhenlan Ji The Hong Kong University of Science and Technology, Juergen Rahmel HSBC, Shuai Wang Hong Kong University of Science and Technology
12:15
15m
Talk
TraceFL: Interpretability-Driven Debugging in Federated Learning via Neuron Provenance [SE for AI] [Artifact-Functional] [Artifact-Available] [Artifact-Reusable]
Research Track
Waris Gill Virginia Tech, Ali Anwar University of Minnesota, Muhammad Ali Gulzar Virginia Tech
Pre-print