ESEIW 2024
Sun 20 - Fri 25 October 2024 Barcelona, Spain

This program is tentative and subject to change.

Thu 24 Oct 2024 11:40 - 12:00 at Aula de graus (C4 Building) - Software testing Chair(s): Marco Torchiano

Background: Software Vulnerability (SV) assessment is increasingly adopted to address the ever-increasing volume and complexity of SVs. Data-driven approaches have been widely used to automate SV assessment tasks, particularly the prediction of the Common Vulnerability Scoring System (CVSS) metrics such as exploitability, impact, and severity. SV assessment suffers from the imbalanced distributions of the CVSS classes, but such data imbalance has hardly been understood or addressed in the literature. Aims: We conduct a large-scale study to quantify the impacts of data imbalance and to mitigate the issue for SV assessment through the use of data augmentation. Method: We leverage nine data augmentation techniques to balance the class distributions of the CVSS metrics. We then compare the performance of SV assessment models with and without the augmented data. Results: Through extensive experiments on 180k+ real-world SVs, we show that mitigating data imbalance can significantly improve the predictive performance of models for all the CVSS tasks, by up to 31.8% in Matthews Correlation Coefficient. We also discover that simple text augmentation, such as combining random text insertion, deletion, and replacement, can outperform the baseline across the board. Conclusions: Our study provides the motivation and a first promising step toward tackling data imbalance for effective SV assessment.
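The "simple text augmentation" the abstract mentions (random insertion, deletion, and replacement of tokens) can be illustrated with a minimal sketch. This is not the authors' implementation; the function name, probabilities, and replacement strategy below are illustrative assumptions only.

```python
import random

def augment_text(tokens, p=0.1, rng=None):
    """Illustrative sketch of combined random insertion, deletion,
    and replacement on a token list (not the paper's actual code).

    Each token is independently deleted with probability p, replaced
    with probability p, or kept; a random token from the input is
    inserted after a kept token with probability p.
    """
    rng = rng or random.Random()
    out = []
    for tok in tokens:
        r = rng.random()
        if r < p:                      # deletion
            continue
        if r < 2 * p:                  # replacement (sampled from the input itself)
            out.append(rng.choice(tokens))
            continue
        out.append(tok)                # keep the original token
        if rng.random() < p:           # insertion of a duplicated token
            out.append(rng.choice(tokens))
    return out
```

Applied to an SV description such as "buffer overflow in parser allows remote code execution", repeated calls yield perturbed variants of the minority-class descriptions, which is how augmentation can rebalance the CVSS class distributions.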


Thu 24 Oct

Displayed time zone: Brussels, Copenhagen, Madrid, Paris

11:00 - 12:30: Software testing
11:00
20m
Full-paper
Contexts Matter: An Empirical Study on Contextual Influence in Fairness Testing for Deep Learning Systems
ESEM Technical Papers
Chengwen Du University of Birmingham, Tao Chen University of Birmingham
11:20
20m
Full-paper
Automatic Data Labeling for Software Vulnerability Prediction Models: How Far Are We?
ESEM Technical Papers
Triet Le The University of Adelaide, Muhammad Ali Babar School of Computer Science, The University of Adelaide
11:40
20m
Full-paper
Mitigating Data Imbalance for Software Vulnerability Assessment: Does Data Augmentation Help?
ESEM Technical Papers
Triet Le The University of Adelaide, Muhammad Ali Babar School of Computer Science, The University of Adelaide
12:00
15m
Industry talk
From Literature to Practice: Exploring Fairness Testing Tools for the Software Industry Adoption
ESEM IGC
Thanh Nguyen University of Calgary, Maria Teresa Baldassarre Department of Computer Science, University of Bari, Luiz Fernando de Lima, Ronnie de Souza Santos University of Calgary
Pre-print
12:15
15m
Vision and Emerging Results
Do Developers Use Static Application Security Testing (SAST) Tools Straight Out of the Box? A large-scale Empirical Study
ESEM Emerging Results, Vision and Reflection Papers Track
Gareth Bennett Lancaster University, Tracy Hall Lancaster University, Steve Counsell Brunel University London, Emily Winter Lancaster University, Thomas Shippey LogicMonitor