EASE 2024
Tue 18 - Fri 21 June 2024 Salerno, Italy

Accepted Papers

All papers below were accepted to the Research Papers track.

  • A Catalog of Transformations to Remove Test Smells in Natural Language Tests (Distinguished Paper Award)
  • Adversarial Attack and Robustness Improvement on Code Summarization
  • Analyzing Prerequisites of Known Deserialization Vulnerabilities on Java Applications
  • An Empirically Grounded Reference Architecture for Software Supply Chain Metadata Management
  • An Empirical Study on Code Coverage of Performance Testing
  • An Empirical Study on the Energy Usage and Performance of Pandas and Polars Data Analysis Python Libraries
  • An Extensive Comparison of Static Application Security Testing Tools
  • A Performance Study of LLM-Generated Code on Leetcode
  • A Quantitative Investigation of Trends in Confusing Variable Pairs Through Commits: Do Confusing Variable Pairs Survive?
  • Code Summarization without Direct Access to Code - Towards Exploring Federated LLMs for Software Engineering
  • Context Switch Sensitive Fault Localization (Distinguished Paper Award)
  • Data Quality Assessment in the Wild: Findings from GitHub
  • Does trainer gender make a difference when delivering phishing training? A new experimental design to capture bias
  • How Much Logs Does My Source Code File Need? Learning to Predict the Density of Logs
  • How the Training Procedure Impacts the Performance of Deep Learning-based Vulnerability Patching
  • Improving classifier-based effort-aware software defect prediction by reducing ranking errors
  • Issues and Their Causes in WebAssembly Applications: An Empirical Study
  • LEGION: Harnessing Pre-trained Language Models for GitHub Topic Recommendations with Distribution-Balance Loss
  • Leveraging Statistical Machine Translation for Code Search
  • LLM-Based Chatbots for Mining Software Repositories: Challenges and Opportunities
  • "Looks Good To Me ;-)": Assessing Sentiment Analysis Tools for Pull Request Discussions
  • Motivation Research Using Labeling Functions
  • Mutation Testing for Task-Oriented Chatbots
  • On the Accuracy of GitHub's Dependency Graph
  • Reality Check: Assessing GPT-4 in Fixing Real-World Software Vulnerabilities
  • The Promise and Challenges of using LLMs to Accelerate the Screening Process of Systematic Reviews
  • Towards Comprehending Energy Consumption of Database Management Systems - A Tool and Empirical Study
  • Towards Semi-Automated Merge Conflict Resolution: Is It Easier Than We Expected? (Distinguished Paper Award)
  • Trustworthy AI in practice: an analysis of practitioners' needs and challenges
  • Understanding Logical Expressions with Negations: It's Complicated
  • Using Large Language Models to Generate JUnit Tests: An Empirical Study
  • VulDL: Tree-based and Graph-based Neural Networks for Vulnerability Detection and Localization

Call for Papers

The International Conference on Evaluation and Assessment in Software Engineering (EASE) is one of the premier conferences for research related to empirical software engineering. The EASE research track seeks high-quality submissions of technical research papers describing original and unpublished results.

Topics and Methods

EASE welcomes papers addressing topics related to evaluating and assessing software products, processes, practices, tools & techniques, including:

  • Meta-science (e.g., papers about research methods and methodological issues, whether empirical or conceptual)
    • Infrastructure and techniques for conducting empirical studies on SE.
    • Theory development, operationalization, testing, and application.
  • Applications of evaluation and assessment in specific contexts, such as:
    • Software Requirements
    • Software Architecture
    • Software Design
    • Software Construction
    • Software Testing
    • Software Engineering Operations
    • Software Maintenance
    • Software Configuration Management
    • Software Engineering Management
    • Software Engineering Models and Methods
    • Software Engineering Process
    • Software Quality
    • Software Security
    • Software Engineering Economics
    • Software Engineering Professional Practice
    • Computing Foundations
    • Human factors and behavioral aspects of SE
  • Inter- or multi-disciplinary studies intersecting software engineering
  • Evaluation and comparison of technologies and approaches (e.g., IoT, Context-Awareness, Cyber-physical)

Similarly, EASE welcomes papers employing any of the following empirical methods in SE:

  • Action Research
  • Benchmarking
  • Case Study
  • Case Survey
  • Data Science
  • Engineering Research (aka design as research, design science)
  • Experiment with human participants
  • Grounded Theory
  • Longitudinal Study
  • Meta-science
  • Mixed Methods (also select methods that were mixed)
  • Optimization Studies
  • Qualitative Survey (i.e., interview study)
  • Quantitative Simulation
  • Questionnaire Survey (quantitative)
  • Repository Mining
  • Systematic Literature Review
  • Replication studies

EASE also welcomes studies with negative findings or non-significant results.

How to submit

All papers must be submitted in PDF format through the web-based submission system at https://easychair.org/conferences/?conf=ease2024. Submissions must not exceed 10 pages, including all figures, tables, references, and appendices.

All submissions should use the official ACM Primary Article Template (https://www.acm.org/publications/proceedings-template). Deviating from the ACM formatting instructions may lead to a desk rejection. LaTeX users should use the following options:

\documentclass[sigconf,review,anonymous]{acmart}
\acmConference[EASE 2024]{The 28th International Conference on Evaluation and Assessment in Software Engineering}{18–21 June, 2024}{Salerno, Italy}

Authors must comply with the SIGSOFT Open Science Policy https://github.com/acmsigsoft/open-science-policies/blob/master/sigsoft-open-science-policies.md (i.e., to archive data and artifacts in a permanent repository—e.g., Zenodo, not GitHub—to the extent ethically and practically possible, and include links in a Data Availability section in their manuscripts).
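To illustrate how the template options and the open-science requirement fit together, a minimal anonymized submission skeleton might look like the sketch below. The title, section content, and Zenodo DOI are hypothetical placeholders, not prescribed values; only the \documentclass options and \acmConference line come from the instructions above.

```latex
\documentclass[sigconf,review,anonymous]{acmart}
\acmConference[EASE 2024]{The 28th International Conference on Evaluation
  and Assessment in Software Engineering}{18--21 June, 2024}{Salerno, Italy}

\begin{document}

% Anonymous review: no \author or \affiliation commands.
\title{An Empirical Study of ...}  % placeholder title

\begin{abstract}
  % acmart expects the abstract before \maketitle.
\end{abstract}

\maketitle

\section{Introduction}
% ... paper body, at most 10 pages in total ...

% Data Availability section required by the SIGSOFT Open Science Policy.
\section*{Data Availability}
Our replication package is archived at
\url{https://doi.org/10.5281/zenodo.XXXXXXX}  % hypothetical DOI
(link anonymized for review).

\end{document}
```

Note that the `review` option adds line numbers for reviewers and `anonymous` suppresses author information; both should be removed for the camera-ready version per the ACM template's usual workflow.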

EASE 2024 will employ a double-anonymous review process. Do not include author names or affiliations in submissions. All references to the authors' own prior work should be in the third person. Any online supplements, replication packages, etc., referred to in the work should also be anonymized. Advice for sharing supplements anonymously can be found at https://ineed.coffee/post/how-to-disclose-data-for-double-blind-review-and-make-it-archived-open-data-upon-acceptance

By submitting to EASE, authors agree to the ACM Policy and Procedures on Plagiarism, Misrepresentation, and Falsification https://www.acm.org/publications/policies/plagiarism-overview. Papers submitted to EASE must not be published or under review elsewhere. The Program Chairs may use plagiarism detection software under contract to the ACM. If the research involves human participants/subjects, the authors must adhere to the ACM Publications Policy on Research Involving Human Participants and Subjects https://www.acm.org/publications/policies/research-involving-human-participants-and-subjects.

Review criteria

Papers selected for the field experiment will receive two standards-based reviews. To see the criteria that standards-based reviews will use, open this tool: https://acmsigsoft.github.io/EmpiricalStandards/form_generator/Checklist.html?role=author. Read the introductory paragraph carefully, select the method(s) used in your submission, and click “Submit.” Going through this checklist is a great way to optimize your paper for EASE.

All papers will receive free-text reviews like in previous years. These reviews will evaluate submissions against the following criteria:

  • Soundness: The extent to which the paper’s contributions and/or innovations address its research questions and are supported by rigorous application of appropriate research methods
  • Significance: The extent to which the paper’s contributions can impact the field of software engineering and under which assumptions (if any)
  • Novelty: The extent to which the contributions are sufficiently original with respect to the state-of-the-art
  • Verifiability and Transparency: The extent to which the paper includes sufficient information to understand how an innovation works; how data was obtained, analyzed, and interpreted; and how the paper supports independent verification or replication of the paper’s claimed contributions.
  • Presentation: The extent to which the paper’s quality of writing meets the high standards of EASE, including clear descriptions, as well as adequate use of the English language, absence of major ambiguity, clearly readable figures and tables, and adherence to the formatting instructions provided above.

Important dates

Abstract Submission deadline: 11 January 2024, AoE
Submission deadline: 18 January 2024, AoE
Notification: 6 March 2024, AoE
Camera-ready: 26 April 2024, AoE
Early registration deadline: 5 May 2024, AoE

Conference Attendance Expectation

If a submission is accepted, at least one author of the paper is required to register for the conference and present the paper.