EASE 2025
Tue 17 - Fri 20 June 2025 Istanbul, Turkey

Call for Papers

Artificial Intelligence (AI) has gained significant traction in the Software Engineering (SE) domain in recent years, and data is the key requirement and driver for the success of these models. The inaugural EASE 2025 – AI Models/Data track invites submissions on the design, development, and evaluation of datasets and AI-based models that can provide knowledge and automation support for different SE tasks.

Topics of Interest

We welcome two types of submissions in this track: (1) data papers and (2) AI model papers relevant to software engineering. Topics of interest for each paper type include, but are not limited to:

Data Papers

  • Curation of new dataset(s) with a clear description/evaluation of how the curated dataset(s) can be used for one or more tasks in the software development lifecycle, including requirements, design, implementation, testing, and maintenance
  • Empirical evaluation of different quality attributes of existing dataset(s) for SE
  • Data quality assessment of existing/new dataset(s) for SE (e.g., detecting the data-related issues that can affect downstream models)
  • Data quality assurance of existing/new dataset(s) for SE (e.g., mitigating data-related issues and the impact/implications)
  • Innovative methods and tools for leveraging existing/new datasets to improve SE

AI Model Papers

  • Design, development, and evaluation of new AI models, including Large Language Models and Foundation Models, for SE
  • Processes and tools to support the design, development, testing, deployment, and monitoring of AI models for SE (e.g., MLOps)
  • Empirical evaluation of different quality attributes of AI models for SE hosted on model repositories (e.g., Hugging Face)
  • Replication and reproduction of existing AI models for SE with clear new insights compared to the original study. Trivial application of existing models will not be accepted.
  • Empirical studies and/or new tools for generating, understanding, and supporting AI-BoM (Bill of Materials)
  • Industrial evaluation of usage and adoption of AI models for SE
  • Human factors and cognitive biases affecting the usage and adoption of AI models for SE
  • Ethical considerations and evaluation of AI models for SE

Note that submissions combining both paper types are also welcome.

The dataset(s) and AI model(s) must be made available at the time of paper submission for review, but will be kept confidential until the paper is published. Any personally identifiable information in the data must be anonymized before sharing. Submissions must also include detailed instructions on how to set up the environment (e.g., a requirements.txt file) and how to use the dataset or model (e.g., how to import the data or how to access it once imported, and how to run the model). Upon publication of the paper, the authors should archive the data or tool in a persistent repository that provides a digital object identifier (DOI), such as figshare.com, zenodo.org, Archive.org, or an institutional repository.
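As a minimal sketch of the kind of usage instructions expected in a replication package, the snippet below shows a self-contained loading function such as a submission's README might document. The file name (`data/issues.csv`) and the columns are illustrative assumptions, not a required format.

```python
import csv
import io

def load_records(csv_text):
    """Parse a CSV export of the dataset into a list of dicts,
    one per row, keyed by the header names."""
    return list(csv.DictReader(io.StringIO(csv_text)))

# In a real package this text would come from, e.g., open("data/issues.csv").
SAMPLE = "id,label\n1,bug\n2,feature\n"

records = load_records(SAMPLE)
print(len(records), records[0]["label"])  # 2 bug
```

A short, runnable example like this (alongside a requirements.txt) lets reviewers verify the artifact without guessing at file layouts or dependencies.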

Submission Guidelines

We invite submissions conforming to the following guidelines:

  • All submissions must be between 4 and 10 pages, including the bibliography.
  • All submissions should be submitted in PDF format through EasyChair for EASE 2025.
  • All submissions should use the official ACM Primary Article Template. LaTeX users should use the following options:
\documentclass[sigconf,review,anonymous]{acmart}
\acmConference[EASE 2025]{The 29th International Conference on Evaluation and Assessment in Software Engineering}{17–20 June, 2025}{Istanbul, Türkiye}
  • We will employ a double-anonymous review process. The authors should not include their names or affiliations in submissions. Any online supplements, replication packages, etc., referred to in the work should also be anonymized.

Review Criteria

All the submissions will be evaluated based on the following criteria:

  • Soundness: Rigor in research methods and reflection of challenges faced and learnings gained.
  • Significance: Potential impact in the corresponding domain and applicability of the lessons shared.
  • Novelty: Originality in addressing challenges or presenting new approaches to evaluation and assessment.
  • Verifiability and Transparency: Sufficient detail to support replication or independent verification.
  • Presentation: Clarity, organization, and adherence to formatting and language standards.

Conference Attendance Expectation

At least one author of each accepted paper must register and present. The proceedings will be published in the selected digital library.