ESEIW 2024
Sun 20 - Fri 25 October 2024 Barcelona, Spain

Background: The C/C++ languages hold significant importance in Software Engineering research because of their widespread use in practice. Numerous studies have utilized Machine Learning (ML) and Deep Learning (DL) techniques to detect Software Vulnerabilities (SVs) in code functions in these languages. However, the application of these techniques to function-level SV assessment remains largely unexplored. SV assessment is increasingly crucial as it provides detailed information about the exploitability, impacts, and severity of security defects, thereby aiding in their prioritization and remediation.

Aims: We conduct the first empirical study to investigate and compare the performance of ML and DL models, many of which have been used for SV detection, for function-level SV assessment in C/C++.

Method: Using 9,993 vulnerable C/C++ code functions, we evaluate the performance of six multi-class ML models and five multi-class DL models for function-level SV assessment based on the Common Vulnerability Scoring System (CVSS). We further explore multi-task learning, which can leverage common vulnerable code to predict all SV assessment outputs simultaneously in a single model, and compare the effectiveness and efficiency of this model type with those of the original multi-class models.

Results: We show that the ML models match or even outperform the multi-class DL models for function-level SV assessment while requiring significantly less training time. Employing multi-task learning allows the DL models to perform significantly better than the multi-class models, with an average increase of 8-22% in Matthews Correlation Coefficient (MCC).

Conclusions: We distill the practices of using data-driven techniques for function-level SV assessment in C/C++, including using multi-task DL to balance efficiency and effectiveness. This can establish a strong foundation for future work in this area.
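The abstract reports results in terms of the Matthews Correlation Coefficient (MCC) for multi-class classifiers. As a minimal illustration (not code from the paper), the sketch below implements Gorodkin's multi-class generalization of MCC from a confusion matrix in pure Python; the function and variable names are our own:

```python
import math

def multiclass_mcc(y_true, y_pred):
    """Multi-class MCC (Gorodkin's R_K) computed from a confusion matrix.

    Illustrative sketch only; libraries such as scikit-learn provide
    an equivalent via sklearn.metrics.matthews_corrcoef.
    """
    classes = sorted(set(y_true) | set(y_pred))
    idx = {c: i for i, c in enumerate(classes)}
    K = len(classes)

    # Build the K x K confusion matrix: rows = true class, cols = predicted.
    C = [[0] * K for _ in range(K)]
    for t, p in zip(y_true, y_pred):
        C[idx[t]][idx[p]] += 1

    s = len(y_true)                                   # total samples
    c = sum(C[k][k] for k in range(K))                # correctly predicted
    t = [sum(C[k]) for k in range(K)]                 # true count per class
    p = [sum(C[k][j] for k in range(K)) for j in range(K)]  # predicted count

    numerator = c * s - sum(tk * pk for tk, pk in zip(t, p))
    denominator = math.sqrt((s * s - sum(pk * pk for pk in p)) *
                            (s * s - sum(tk * tk for tk in t)))
    return numerator / denominator if denominator else 0.0
```

MCC ranges from -1 (total disagreement) through 0 (chance-level) to +1 (perfect prediction), which is why a reported 8-22% MCC increase is a meaningful gain for imbalanced CVSS classes.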

Thu 24 Oct

Displayed time zone: Brussels, Copenhagen, Madrid, Paris

16:00 - 17:30
Software vulnerabilities and defects
ESEM Technical Papers / ESEM Emerging Results, Vision and Reflection Papers Track / ESEM Journal-First Papers at Sala de graus (C4 Building)
Chair(s): Daniela Cruzes Norwegian University of Science and Technology
16:00
20m
Full-paper
Automated Code-centric Software Vulnerability Assessment: How Far Are We? An Empirical Study in C/C++
ESEM Technical Papers
Anh Nguyen The, Triet Le The University of Adelaide, Muhammad Ali Babar School of Computer Science, The University of Adelaide
DOI Pre-print
16:20
20m
Full-paper
Empirical Evaluation of Frequency Based Statistical Models for Estimating Killable Mutants
ESEM Technical Papers
Konstantin Kuznetsov Saarland University, CISPA, Alessio Gambi Austrian Institute of Technology (AIT), Saikrishna Dhiddi Passau University, Julia Hess Saarland University, Rahul Gopinath University of Sydney
16:40
20m
Full-paper
Reevaluating the Defect Proneness of Atoms of Confusion in Java Systems
ESEM Technical Papers
Guoshuai Shi University of Waterloo, Farshad Kazemi University of Waterloo, Michael W. Godfrey University of Waterloo, Canada, Shane McIntosh University of Waterloo
Pre-print
17:00
15m
Vision and Emerging Results
DetectBERT: Towards Full App-Level Representation Learning to Detect Android Malware
ESEM Emerging Results, Vision and Reflection Papers Track
Tiezhu Sun University of Luxembourg, Nadia Daoudi Luxembourg Institute of Science and Technology, Kisub Kim Singapore Management University, Singapore, Kevin Allix Independent Researcher, Tegawendé F. Bissyandé University of Luxembourg, Jacques Klein University of Luxembourg
17:15
15m
Journal Early-Feedback
Identifying concerns when specifying machine learning-enabled systems: A perspective-based approach
ESEM Journal-First Papers
Hugo Villamizar fortiss GmbH, Marcos Kalinowski Pontifical Catholic University of Rio de Janeiro (PUC-Rio), Helio Côrtes Vieira Lopes PUC-Rio, Daniel Mendez Blekinge Institute of Technology and fortiss
DOI