Using Benji to Systematically Evaluate Model Comparison Algorithms (Demo)
Model comparison is a critical task in model-driven engineering. Its correctness enables effective management of model evolution and synchronisation, and supports further tasks such as model transformation testing. The literature is rich in comparison algorithms and approaches; however, the same cannot be said for their systematic evaluation. In this paper we present Benji, a tool for generating model comparison benchmarks. In particular, Benji provides domain-specific languages to design experiments in terms of input models and possible manipulations, and from those specifications it generates corresponding benchmark cases. In this way, the experiment specification can be exploited as a systematic means of evaluating available comparison algorithms against the problem under study.
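To make the workflow the abstract describes more concrete, the following is a minimal sketch in plain Python, not Benji's actual DSLs: all names here (Model, add_class, delete_class, generate_cases, BenchmarkCase) are hypothetical illustrations. It shows the general idea of specifying an experiment as input models plus possible manipulations, and deriving benchmark cases that pair each original model with a manipulated variant and the known ground-truth edit.

```python
# Conceptual sketch only; NOT Benji's DSL. All names are hypothetical.
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class Model:
    """A toy 'model': just a set of named classes."""
    classes: frozenset

@dataclass(frozen=True)
class BenchmarkCase:
    original: Model
    mutated: Model
    expected_diff: str  # ground-truth description of the applied edit

def add_class(name: str) -> Callable[[Model], tuple]:
    def apply(m: Model):
        return Model(m.classes | {name}), f"add class {name}"
    return apply

def delete_class(name: str) -> Callable[[Model], tuple]:
    def apply(m: Model):
        return Model(m.classes - {name}), f"delete class {name}"
    return apply

def generate_cases(inputs, manipulations):
    """Cross every input model with every manipulation operator."""
    for m in inputs:
        for op in manipulations:
            mutated, diff = op(m)
            yield BenchmarkCase(m, mutated, diff)

# Experiment specification: input models and possible manipulations.
inputs = [Model(frozenset({"Order", "Customer"}))]
manipulations = [add_class("Invoice"), delete_class("Customer")]

for case in generate_cases(inputs, manipulations):
    # A real evaluation would run each comparison algorithm on
    # (case.original, case.mutated) and check whether it recovers
    # case.expected_diff as its reported difference.
    print(case.expected_diff)
```

Because each generated case carries its ground-truth edit, an algorithm's reported differences can be checked mechanically across the whole experiment rather than inspected by hand.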
Fri 23 Oct | Displayed time zone: Eastern Time (US & Canada)
15:00 - 16:15

15:00 (20m) Full paper | Variability Representations in Class Models: An Empirical Assessment (FT) | Technical Track | Daniel Strüber (Radboud University Nijmegen), Anthony Anjorin, Thorsten Berger (Chalmers University of Technology, Sweden / University of Gothenburg, Sweden) | Pre-print

15:20 (20m) Full paper | Co-evolution of Simulink Models in a Model-Based Product Line (P&I) | Technical Track | Robbert Jongeling, Antonio Cicchetti, Federico Ciccozzi, Jan Carlson (Mälardalen University) | Link to publication | DOI | Pre-print

15:40 (15m) Talk | Claimed Advantages and Disadvantages of (dedicated) Model Transformation Languages: A Systematic Literature Review (J1st) | Technical Track

15:55 (15m) Demonstration | Using Benji to Systematically Evaluate Model Comparison Algorithms (Demo) | Technical Track