EASE 2025
Tue 17 - Fri 20 June 2025 Istanbul, Turkey
Dates
Wed 18 Jun 2025
Fri 20 Jun 2025
Tracks
EASE AI Models / Data
EASE Catering
EASE Industry Papers
EASE Journal-first
EASE Learnings/Reflections of Evaluation and Assessment projects in Software Engineering
EASE Posters and Vision
EASE Research Papers
EASE Short Papers, Emerging Results
Plenary

Wed 18 Jun

Displayed time zone: Athens

11:00 - 12:30
11:15
15m
Paper
Actual Practices from Practitioners in Benefits Management in Digitalization Projects
Learnings/Reflections of Evaluation and Assessment projects in Software Engineering
J. David Patón-Romero Simula Metropolitan Center for Digital Engineering, Bertha J. Ngereja Simula Metropolitan Center for Digital Engineering (SimulaMet), Jo Hannay Simula Research Laboratory, Magne Jørgensen Simula Metropolitan Center for Digital Engineering
13:30 - 15:00
13:30
15m
Paper
Exploring turnover, retention and growth in an OSS Ecosystem
Learnings/Reflections of Evaluation and Assessment projects in Software Engineering
Tien Rahayu Tulili University of Groningen, Ayushi Rastogi University of Groningen, The Netherlands, Andrea Capiluppi University of Groningen
Pre-print
14:15
15m
Paper
Integrating Human Feedback into a Reinforcement Learning-Based Framework for Adaptive User Interfaces
Learnings/Reflections of Evaluation and Assessment projects in Software Engineering
Daniel Gaspar Figueiredo Universitat Politècnica de València, Spain, Marta Fernández-Diego Universitat Politècnica de València, Silvia Abrahão Universitat Politècnica de València, Emilio Insfran Universitat Politècnica de València, Spain
Pre-print
15:30 - 17:00
16:10
15m
Paper
Understanding Underrepresented Groups in Open Source Software
Learnings/Reflections of Evaluation and Assessment projects in Software Engineering
Reydne Bruno dos Santos UFPE, Rafa Prado Federal University of Pernambuco, Ana Paula de Holanda Silva Federal University of Pernambuco, Kiev Gama Universidade Federal de Pernambuco, Fernando Castor University of Twente, Ronnie de Souza Santos University of Calgary
Pre-print
15:30 - 17:00
16:25
15m
Paper
NRevisit: A Cognitive Behavioral Metric for Code Understandability Assessment
Learnings/Reflections of Evaluation and Assessment projects in Software Engineering
Hao Gao , Haytham Hijazi CISUC, DEI, University of Coimbra, Júlio Medeiros CISUC, DEI, University of Coimbra, João Durães CISUC, Polytechnic Institute of Coimbra, C.T. Lam Faculty of Applied Sciences, Macau Polytechnic University, Macau, China, Paulo Carvalho University of Coimbra, Henrique Madeira University of Coimbra
Pre-print

Fri 20 Jun

Displayed time zone: Athens

15:30 - 17:00
15:30
15m
Paper
A Unified Semantic Framework for IoT-Healthcare Data Interoperability: A Graph-Based Machine Learning Approach Using RDF and R2RML
Learnings/Reflections of Evaluation and Assessment projects in Software Engineering
Mehran Pourvahab University of Beira Interior, NOVA LINCS, Covilhã, Portugal, Anilson Monteiro University of Beira Interior, NOVA LINCS, Covilhã, Portugal, Sebastião Pais University of Beira Interior, NOVA LINCS, Covilhã, Portugal, Nuno Pombo University of Beira Interior & Instituto de Telecomunicaçōes, Covilhã, Portugal

Accepted Papers

Title
Actual Practices from Practitioners in Benefits Management in Digitalization Projects
Learnings/Reflections of Evaluation and Assessment projects in Software Engineering
A Unified Semantic Framework for IoT-Healthcare Data Interoperability: A Graph-Based Machine Learning Approach Using RDF and R2RML
Learnings/Reflections of Evaluation and Assessment projects in Software Engineering
Exploring turnover, retention and growth in an OSS Ecosystem
Learnings/Reflections of Evaluation and Assessment projects in Software Engineering
Pre-print
Integrating Human Feedback into a Reinforcement Learning-Based Framework for Adaptive User Interfaces
Learnings/Reflections of Evaluation and Assessment projects in Software Engineering
Pre-print
NRevisit: A Cognitive Behavioral Metric for Code Understandability Assessment
Learnings/Reflections of Evaluation and Assessment projects in Software Engineering
Pre-print
Understanding Underrepresented Groups in Open Source Software
Learnings/Reflections of Evaluation and Assessment projects in Software Engineering
Pre-print

Call for Papers

This track invites submissions focusing on learnings drawn from evaluation and assessment projects in Software Engineering. The track emphasizes sharing insights gained through challenging projects, whether successful or unsuccessful, highlighting experiences that led to new understanding or to improvements in research methodologies and practices. We encourage contributions reflecting learnings both from successes achieved by overcoming obstacles and from unexpected results.

Topics of Interest

We welcome submissions exploring evaluation and assessment projects in the following themes:

  • Software Development
  • Software Testing / Quality Assurance
  • Security
  • DevOps / DevSecOps
  • Empirical studies and data collection challenges
  • Large-scale and Distributed Systems
  • Artificial Intelligence
  • IoT and Blockchain
  • Human Factors and Cognitive Biases
  • Sustainability and Long-Term Maintenance
  • Ethical Considerations

Submission Guidelines

We invite submissions conforming to the following guidelines:

  • Submissions are limited to 10 pages, including all figures, tables, appendices, and the bibliography.
  • Submissions must be in PDF format and submitted through EasyChair for EASE 2025.
  • All submissions should use the official ACM Primary Article Template. LaTeX users should use the following options; a minimal skeleton illustrating them appears after this list:
\documentclass[sigconf,review,anonymous]{acmart}
\acmConference[EASE 2025]{The 29th International Conference on Evaluation and Assessment in Software Engineering}{17–20 June, 2025}{Istanbul, Türkiye}
  • Authors should comply with the SIGSOFT Open Science Policies.
  • We will employ a double-anonymous review process. Authors should not include their names or affiliations in submissions. Any online supplements, replication packages, etc., referred to in the work should also be anonymized.
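
As an illustration only, a minimal anonymized submission skeleton built on these options might look like the sketch below; the title, placeholder author block, abstract text, and section name are illustrative and not prescribed by the track:

% Minimal sketch of an anonymized EASE 2025 submission (illustrative only).
\documentclass[sigconf,review,anonymous]{acmart}
\acmConference[EASE 2025]{The 29th International Conference on Evaluation and Assessment in Software Engineering}{17–20 June, 2025}{Istanbul, Türkiye}

\begin{document}

\title{Anonymized Submission Title}
% Placeholder author block: the anonymous option hides author details in the
% generated PDF, and the double-anonymous policy asks that no identifying
% information appear in the submission or its supplementary material.
\author{Anonymous Author(s)}
\affiliation{\institution{Anonymous Institution}\country{Anonymous}}

% In acmart the abstract must be given before \maketitle.
\begin{abstract}
A one-paragraph summary of the project, the challenges faced, and the learnings gained.
\end{abstract}

\maketitle

\section{Introduction}
% Body text, figures, tables, appendices, and the bibliography must all fit
% within the 10-page limit.

\end{document}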

Review Criteria

All submissions will be evaluated based on the following criteria:

  • Soundness: Rigor in research methods and reflection of challenges faced and learnings gained.
  • Significance: Potential impact in the corresponding domain and applicability of the lessons shared.
  • Novelty: Originality in addressing challenges or presenting new approaches to evaluation and assessment.
  • Verifiability and Transparency: Sufficient detail to support replication or independent verification.
  • Presentation: Clarity, organization, and adherence to formatting and language standards.

Conference Attendance Expectation

At least one author of each accepted paper must register for the conference and present the paper. The proceedings will be published in the ACM Digital Library.
