EASE 2024
Tue 18 - Fri 21 June 2024 Salerno, Italy
Thu 20 Jun 2024 10:25 - 10:30 at Room Vietri - Lightning talks of the posters Chair(s): Anna Rita Fasolino

The use of Large Language Models (LLMs) is already well established in software engineering and in security and privacy research. Yet little is known about the effectiveness of LLMs in threat validation, or about the possibility of biased output when LLMs assess security threats for correctness. To address this research gap, we present a pilot study investigating the effectiveness of ChatGPT in the validation of security threats. One main observation from the results was that ChatGPT assessed bogus threats as realistic regardless of the provided assumptions that negated the feasibility of certain threats occurring.

Thu 20 Jun

Displayed time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna

10:10 - 10:30
Lightning talks of the posters (Posters) at Room Vietri
Chair(s): Anna Rita Fasolino, Federico II University of Naples
10:10
5m
Introduction
Posters
Anna Rita Fasolino, Federico II University of Naples
10:15
5m
Talk
Automated Software Vulnerability Detection in Statement Level using Vulnerability Reports
Posters
Rabaya Sultana Mim, Institute of Information Technology, University of Dhaka; Toukir Ahammed, Institute of Information Technology, University of Dhaka; Kazi Sakib, Institute of Information Technology, University of Dhaka
10:20
5m
Talk
Automatic detection and correction of code errors applying machine learning: current research state
Posters
Aneta Poniszewska-Maranda, Institute of Information Technology, Lodz University of Technology; Wiktoria Sarniak, Institute of Information Technology, Lodz University of Technology; Marcin Cegielski, Institute of Information Technology, Lodz University of Technology
10:25
5m
Talk
New experimental design to capture bias using LLM to validate security threats
Posters
Winnie Bahati Mbaka, Vrije Universiteit