EASE 2024
Tue 18 - Fri 21 June 2024 Salerno, Italy
Thu 20 Jun 2024 10:30 - 11:00 at Room Capri - Poster Exhibition

Large Language Models are already widely used in software engineering and in security and privacy research. Yet little is known about their effectiveness in threat validation, or about the possibility of biased output when they assess security threats for correctness. To address this research gap, we present a pilot study investigating the effectiveness of ChatGPT in validating security threats. One main observation from the results was that ChatGPT assessed bogus threats as realistic regardless of the provided assumptions, even when those assumptions negated the feasibility of the threats occurring.
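The validation setup described above, pairing a threat with explicit assumptions that may negate it, can be sketched as a prompt-construction step. The wording, threat text, and assumption below are purely illustrative assumptions, not the study's actual materials:

```python
def build_validation_prompt(threat: str, assumptions: list[str]) -> str:
    """Compose a prompt asking an LLM whether a threat is realistic
    given explicit system assumptions (hypothetical wording, not the
    study's actual protocol)."""
    assumption_lines = "\n".join(f"- {a}" for a in assumptions)
    return (
        "Given the following assumptions about the system:\n"
        f"{assumption_lines}\n\n"
        "Is the following security threat realistic? Answer Yes or No "
        "and justify briefly.\n\n"
        f"Threat: {threat}"
    )

# Example: the assumption negates the threat, so a sound validator
# should answer "No" -- the abstract reports ChatGPT often did not.
prompt = build_validation_prompt(
    "An attacker intercepts credentials sent over the login channel.",
    ["All client-server traffic uses mutually authenticated TLS."],
)
print(prompt)
```

The prompt would then be sent to the model under test; comparing its verdicts on genuine versus bogus threats is one way to surface the bias the study investigates.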

Thu 20 Jun

Displayed time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna

10:30 - 11:00
Poster Exhibition: Posters at Room Capri
10:30
30m
Poster
Automatic detection and correction of code errors applying machine learning - current research state
Posters
Aneta Poniszewska-Maranda (Institute of Information Technology, Lodz University of Technology), Wiktoria Sarniak (Institute of Information Technology, Lodz University of Technology), Marcin Cegielski (Institute of Information Technology, Lodz University of Technology)
10:30
30m
Poster
Automated Software Vulnerability Detection in Statement Level using Vulnerability Reports
Posters
Rabaya Sultana Mim (Institute of Information Technology, University of Dhaka), Toukir Ahammed (Institute of Information Technology, University of Dhaka), Kazi Sakib (Institute of Information Technology, University of Dhaka)
10:30
30m
Poster
New experimental design to capture bias using LLM to validate security threats
Posters
Winnie Bahati Mbaka (Vrije Universiteit)