ICSE 2025
Sat 26 April - Sun 4 May 2025, Ottawa, Ontario, Canada
Fri 2 May 2025 12:00 - 12:15 at 215 - SE for AI with Quality 1 Chair(s): Chris Poskitt

Fully homomorphic encryption (FHE) is a promising cryptographic primitive that enables secure computation over encrypted data. A primary use of FHE is to support privacy-preserving machine learning (ML) on public cloud infrastructures. Despite the rapid development of FHE-based ML (or HE-ML) in recent years, the community still lacks a systematic understanding of the robustness of these models.

In this paper, we aim to systematically test and understand the deviation behaviors of HE-ML models, in which the same input produces divergent outputs between an FHE-hardened model and its plaintext counterpart, leading to completely incorrect model predictions. To uncover deviation-triggering inputs efficiently under the constraints of expensive FHE computation, we design a novel differential testing tool called HEDiff, which uses the margin metric computed on the plaintext model as guidance to drive targeted testing of the FHE model. For the identified deviation inputs, we further analyze whether they exhibit general noise patterns that are transferable. We evaluate HEDiff on three popular HE-ML frameworks, covering 12 combinations of models and datasets. HEDiff detected hundreds of deviation inputs across almost every tested FHE framework and model. We also show quantitatively that the identified deviation inputs are (visually) meaningful in comparison with regular inputs. Further systematic analysis reveals the root cause of these deviant inputs and allows us to generalize their noise patterns for more directed testing.
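The page gives only the abstract, but the guidance idea is concrete enough to sketch. Below is a minimal, hypothetical Python illustration of margin-guided differential testing as the abstract describes it: the margin (gap between the top-1 and top-2 logits of the plaintext model) is used to prioritize inputs near the decision boundary, where FHE approximation noise is most likely to flip a prediction. The names plain_model, fhe_model, mutate, and budget are placeholders for illustration, not HEDiff's actual API.

    import numpy as np

    def margin(logits: np.ndarray) -> float:
        # Margin metric: gap between the top-1 and top-2 logits.
        # A small margin flags an input near the plaintext model's
        # decision boundary, where FHE noise can flip the prediction.
        top2 = np.sort(logits)[-2:]          # two largest logits, ascending
        return float(top2[1] - top2[0])

    def find_deviations(plain_model, fhe_model, seeds, mutate, budget=100):
        # Margin-guided differential testing loop (sketch).
        #   plain_model(x) -> logits from the plaintext model (cheap)
        #   fhe_model(x)   -> logits from the FHE-hardened model (expensive)
        #   mutate(x)      -> a perturbed copy of input x
        # Sort seeds by plaintext margin so the costly FHE evaluations
        # are spent on inputs closest to the decision boundary.
        queue = sorted(seeds, key=lambda x: margin(plain_model(x)))
        deviations = []
        for x in queue[:budget]:
            cand = mutate(x)
            plain_label = int(np.argmax(plain_model(cand)))
            fhe_label = int(np.argmax(fhe_model(cand)))
            if plain_label != fhe_label:     # deviation: outputs disagree
                deviations.append(cand)
        return deviations

The design point this sketch captures is that the plaintext margin is cheap to compute, so it can filter candidates before any encrypted inference runs; this is what keeps differential testing tractable given the cost of FHE computation.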

Fri 2 May

Displayed time zone: Eastern Time (US & Canada)

11:00 - 12:30
SE for AI with Quality 1 (Research Track) at 215
Chair(s): Chris Poskitt Singapore Management University
11:00
15m
Talk
A Tale of Two DL Cities: When Library Tests Meet Compiler (SE for AI)
Research Track
Qingchao Shen Tianjin University, Yongqiang Tian, Haoyang Ma Hong Kong University of Science and Technology, Junjie Chen Tianjin University, Lili Huang College of Intelligence and Computing, Tianjin University, Ruifeng Fu Tianjin University, Shing-Chi Cheung Hong Kong University of Science and Technology, Zan Wang Tianjin University
11:15
15m
Talk
Iterative Generation of Adversarial Example for Deep Code Models (SE for AI, Award Winner)
Research Track
Li Huang, Weifeng Sun, Meng Yan Chongqing University
11:30
15m
Talk
On the Mistaken Assumption of Interchangeable Deep Reinforcement Learning Implementations (SE for AI; Artifact-Functional, Artifact-Available, Artifact-Reusable)
Research Track
Rajdeep Singh Hundal National University of Singapore, Yan Xiao Sun Yat-sen University, Xiaochun Cao Sun Yat-sen University, Jin Song Dong National University of Singapore, Manuel Rigger National University of Singapore
Pre-print
11:45
15m
Talk
µPRL: a Mutation Testing Pipeline for Deep Reinforcement Learning based on Real Faults (SE for AI; Artifact-Functional, Artifact-Available, Artifact-Reusable)
Research Track
Deepak-George Thomas Tulane University, Matteo Biagiola Università della Svizzera italiana, Nargiz Humbatova Università della Svizzera italiana, Mohammad Wardat Oakland University, USA, Gunel Jahangirova King's College London, Hridesh Rajan Tulane University, Paolo Tonella USI Lugano
Pre-print
12:00
15m
Talk
Testing and Understanding Deviation Behaviors in FHE-hardened Machine Learning Models (SE for AI)
Research Track
Yiteng Peng Hong Kong University of Science and Technology, Daoyuan Wu Hong Kong University of Science and Technology, Zhibo Liu Hong Kong University of Science and Technology, Dongwei Xiao Hong Kong University of Science and Technology, Zhenlan Ji The Hong Kong University of Science and Technology, Juergen Rahmel HSBC, Shuai Wang Hong Kong University of Science and Technology
12:15
15m
Talk
TraceFL: Interpretability-Driven Debugging in Federated Learning via Neuron Provenance (SE for AI; Artifact-Functional, Artifact-Available, Artifact-Reusable)
Research Track
Waris Gill Virginia Tech, Ali Anwar University of Minnesota, Muhammad Ali Gulzar Virginia Tech
Pre-print