EASE 2025
Tue 17 - Fri 20 June 2025 Istanbul, Turkey
Fri 20 Jun 2025 16:30 - 16:45 at Senate Hall - Model/Data. Chair(s): Giusy Annunziata

Large Language Model (LLM)-generated data is increasingly used in software analytics, but it is unclear how this data compares to human-written data, particularly when models are exposed to adversarial scenarios. Adversarial attacks can compromise the reliability and security of software systems. Understanding how LLM-generated data performs under such attacks, compared to human-written data, which serves as the benchmark for model performance, therefore provides valuable insight into whether LLM-generated data offers similar robustness and effectiveness. To address this gap, we systematically evaluate and compare the quality of human-written and LLM-generated data for fine-tuning robust pre-trained models (PTMs) in the context of adversarial attacks. We evaluate the robustness of six widely used PTMs, fine-tuned on human-written and LLM-generated data, before and after adversarial attacks. This evaluation employs nine state-of-the-art (SOTA) adversarial attack techniques across three popular software analytics tasks: clone detection, code summarization, and sentiment analysis in code review discussions. Additionally, we analyze the quality of the generated adversarial examples using eleven similarity metrics. Our findings reveal that while PTMs fine-tuned on LLM-generated data perform competitively with those fine-tuned on human-written data, they are less robust against adversarial attacks in software analytics tasks. Our study underscores the need for further work on improving the quality of LLM-generated training data so that models are both high-performing and able to withstand adversarial attacks in software analytics.
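The evaluation protocol described in the abstract boils down to: fine-tune a PTM on either human-written or LLM-generated data, measure task performance on clean inputs, then re-measure on adversarially perturbed versions of the same inputs and compare the degradation. A minimal, illustrative sketch of that robustness comparison for a classification-style task (e.g., clone detection or sentiment analysis) is below; `predict`, `attack`, and the dataset handling are hypothetical placeholders, not the authors' implementation.

```python
# Illustrative sketch only: hypothetical model and attack interfaces,
# not the paper's actual evaluation code.
from typing import Callable, List, Tuple

def robustness_report(
    predict: Callable[[str], int],     # fine-tuned PTM's prediction function (hypothetical)
    attack: Callable[[str], str],      # adversarial perturbation of the input (hypothetical)
    dataset: List[Tuple[str, int]],    # (code snippet or review text, gold label) pairs
) -> dict:
    clean_correct, adv_correct, flipped = 0, 0, 0
    for text, label in dataset:
        clean_ok = predict(text) == label
        adv_ok = predict(attack(text)) == label
        clean_correct += clean_ok
        adv_correct += adv_ok
        flipped += clean_ok and not adv_ok  # attack flipped an initially correct prediction
    n = len(dataset)
    return {
        "clean_accuracy": clean_correct / n,
        "adversarial_accuracy": adv_correct / n,
        # fraction of initially correct predictions that the attack manages to flip
        "attack_success_rate": flipped / clean_correct if clean_correct else 0.0,
    }
```

Comparing these numbers for a model fine-tuned on human-written data against one fine-tuned on LLM-generated data is, in essence, the robustness comparison the study performs across its nine attack techniques and three tasks.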

Fri 20 Jun

Displayed time zone: Athens

15:30 - 17:00
15:30
15m
Paper
A Unified Semantic Framework for IoT-Healthcare Data Interoperability: A Graph-Based Machine Learning Approach Using RDF and R2RML
Learnings/Reflections of Evaluation and Assessment projects in Software Engineering
Mehran Pourvahab University of Beira Interior, NOVA LINCS, Covilhã, Portugal, Anilson Monteiro University of Beira Interior, NOVA LINCS, Covilhã, Portugal, Sebastião Pais University of Beira Interior, NOVA LINCS, Covilhã, Portugal, Nuno Pombo University of Beira Interior & Instituto de Telecomunicações, Covilhã, Portugal
15:45
15m
Talk
ALOHA: A(IBoM) tooL generatOr for Hugging fAce
AI Models / Data
Riccardo D'Avino University of Salerno, Sabato Nocera University of Salerno, Daniele Bifolco University of Sannio, Federica Pepe University of Sannio, Massimiliano Di Penta University of Sannio, Italy, Giuseppe Scanniello University of Salerno
Pre-print
16:00
15m
Talk
Automatic Classification of Software Repositories: a Systematic Mapping Study
Research Papers
Stefano Balla DISI - Università di Bologna, Thomas Degueule CNRS, Romain Robbes CNRS, LaBRI, University of Bordeaux, Jean-Rémy Falleri Univ. Bordeaux, CNRS, Bordeaux INP, LaBRI, UMR 5800, Institut Universitaire de France, Stefano Zacchiroli LTCI, Télécom Paris, Institut Polytechnique de Paris, Palaiseau, France
Pre-print Media Attached File Attached
16:15
15m
Talk
BugsRepo: A Comprehensive Curated Dataset of Bug Reports, Comments and Contributors Information from Bugzilla
AI Models / Data
Jagrit Acharya University of Calgary, Gouri Ginde (Deshpande) University of Calgary
16:30
15m
Talk
Large Language Models as Robust Data Generators in Software Analytics: Are We There Yet?
AI Models / Data
Md. Abdul Awal University of Saskatchewan, Mrigank Rochan University of Saskatchewan, Chanchal K. Roy University of Saskatchewan
Pre-print