Learnings/Reflections of Evaluation and Assessment projects in Software Engineering (EASE 2025)
Wed 18 Jun (displayed time zone: Athens)
11:00 - 12:30 | Human Factors in Software Engineering (Research Papers / Industry Papers / Learnings/Reflections of Evaluation and Assessment projects in Software Engineering) | Glass Room | Chair(s): Viktoria Stray, University of Oslo / SINTEF
11:15 | 15m | Paper: Actual Practices from Practitioners in Benefits Management in Digitalization Projects. J. David Patón-Romero (Simula Metropolitan Center for Digital Engineering), Bertha J. Ngereja (Simula Metropolitan Center for Digital Engineering), Jo Hannay (Simula Research Laboratory), Magne Jørgensen (Simula Metropolitan Center for Digital Engineering)
13:30 - 15:00 | Human Factors in Software Engineering (Learnings/Reflections of Evaluation and Assessment projects in Software Engineering / Research Papers / Industry Papers) | Glass Room | Chair(s): Viktoria Stray, University of Oslo / SINTEF
13:30 | 15m | Paper: Exploring turnover, retention and growth in an OSS Ecosystem. Tien Rahayu Tulili (University of Groningen), Ayushi Rastogi (University of Groningen), Andrea Capiluppi (University of Groningen). Pre-print available.
14:15 | 15m | Paper: Integrating Human Feedback into a Reinforcement Learning-Based Framework for Adaptive User Interfaces. Daniel Gaspar Figueiredo (Universitat Politècnica de València), Marta Fernández-Diego (Universitat Politècnica de València), Silvia Abrahão (Universitat Politècnica de València), Emilio Insfran (Universitat Politècnica de València). Pre-print available.
15:30 - 17:00 | Human Factors in Software Engineering (Research Papers / Industry Papers / Learnings/Reflections of Evaluation and Assessment projects in Software Engineering) | Glass Room | Chair(s): Eray Tüzün, Bilkent University
16:10 | 15m | Paper: Understanding Underrepresented Groups in Open Source Software. Reydne Bruno dos Santos (UFPE), Rafa Prado (Federal University of Pernambuco), Ana Paula de Holanda Silva (Federal University of Pernambuco), Kiev Gama (Universidade Federal de Pernambuco), Fernando Castor (University of Twente), Ronnie de Souza Santos (University of Calgary). Pre-print available.
15:30 - 17:00 | VV&T (Short Papers, Emerging Results / Industry Papers / Learnings/Reflections of Evaluation and Assessment projects in Software Engineering / Research Papers) | Workshop Room | Chair(s): Ivan Machado, Federal University of Bahia - UFBA
16:25 | 15m | Paper: NRevisit: A Cognitive Behavioral Metric for Code Understandability Assessment. Hao Gao, Haytham Hijazi (CISUC, DEI, University of Coimbra), Júlio Medeiros (CISUC, DEI, University of Coimbra), João Durães (CISUC, Polytechnic Institute of Coimbra), C.T. Lam (Faculty of Applied Sciences, Macau Polytechnic University), Paulo Carvalho (University of Coimbra), Henrique Madeira (University of Coimbra). Pre-print available.
Fri 20 Jun (displayed time zone: Athens)
15:30 - 17:00 | Model/Data (AI Models / Data / Learnings/Reflections of Evaluation and Assessment projects in Software Engineering / Research Papers) | Senate Hall | Chair(s): Giusy Annunziata, University of Salerno
15:30 | 15m | Paper: A Unified Semantic Framework for IoT-Healthcare Data Interoperability: A Graph-Based Machine Learning Approach Using RDF and R2RML. Mehran Pourvahab (University of Beira Interior, NOVA LINCS, Covilhã, Portugal), Anilson Monteiro (University of Beira Interior, NOVA LINCS, Covilhã, Portugal), Sebastião Pais (University of Beira Interior, NOVA LINCS, Covilhã, Portugal), Nuno Pombo (University of Beira Interior & Instituto de Telecomunicações, Covilhã, Portugal)
Call for Papers
This track invites submissions focusing on the learnings drawn from evaluation and assessment projects in Software Engineering. It emphasizes sharing insights gained through successful (or unsuccessful) challenging projects, highlighting experiences that led to new understanding or to improvements in research methodologies and practices. We encourage contributions reflecting learnings both from successes achieved by overcoming obstacles and from unexpected results.
Topics of Interest
We welcome submissions exploring evaluation and assessment projects in the following themes:
- Software Development
- Software Testing / Quality Assurance
- Security
- DevOps / DevSecOps
- Empirical studies and data collection challenges
- Large-scale and Distributed Systems
- Artificial Intelligence
- IoT and Blockchain
- Human Factors and Cognitive Biases
- Sustainability and Long-Term Maintenance
- Ethical Considerations
Submission Guidelines
We invite submissions conforming to the following guidelines:
- All submissions are limited to 10 pages, including all figures, tables, appendices, and the bibliography.
- All submissions should be submitted in PDF format through EasyChair for EASE 2025.
- All submissions should use the official ACM Primary Article Template. LaTeX users should use the following options:
\documentclass[sigconf,review,anonymous]{acmart}
\acmConference[EASE 2025]{The 29th International Conference on Evaluation and Assessment in Software Engineering}{17–20 June, 2025}{Istanbul, Türkiye}
- Authors should comply with the SIGSOFT Open Science Policies.
- We will employ a double-anonymous review process. Authors must not include their names or affiliations in their submissions, and any online supplements, replication packages, etc., referred to in the work must also be anonymized.
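Taken together, a minimal anonymized submission skeleton following these guidelines might look like the sketch below. This is illustrative only: the title and section names are placeholders, and the official ACM Primary Article Template documentation governs the required metadata commands.

```latex
\documentclass[sigconf,review,anonymous]{acmart}
\acmConference[EASE 2025]{The 29th International Conference on Evaluation and Assessment in Software Engineering}{17--20 June, 2025}{Istanbul, Türkiye}

\begin{document}

\title{Your Paper Title}
% No \author or \affiliation commands here:
% the review process is double-anonymous.

% In acmart, the abstract environment goes before \maketitle.
\begin{abstract}
Abstract text goes here.
\end{abstract}

\maketitle

\section{Introduction}
% Body text. Any links to supplements or replication
% packages must also be anonymized.

\end{document}
```

The `anonymous` option suppresses author information in the rendered PDF, and `review` adds line numbers for reviewers.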
Review Criteria
All the submissions will be evaluated based on the following criteria:
- Soundness: Rigor in research methods and reflection of challenges faced and learnings gained.
- Significance: Potential impact in the corresponding domain and applicability of the lessons shared.
- Novelty: Originality in addressing challenges or presenting new approaches to evaluation and assessment.
- Verifiability and Transparency: Sufficient detail to support replication or independent verification.
- Presentation: Clarity, organization, and adherence to formatting and language standards.
Conference Attendance Expectation
At least one author of each accepted paper must register for the conference and present the paper. The proceedings will be published in the ACM Digital Library.