MODELS 2023 Technical Track
About
MODELS is the premier conference series for model-based software and systems engineering. Since 1998, MODELS has covered all aspects of modeling, from languages and methods to tools and applications. Its participants come from a wide variety of backgrounds and include researchers, academics, engineers, and industry professionals.
MODELS 2023 is a forum for participants to share the latest research and practical experiences around modeling, modeling languages, and model-based software and systems engineering. Contributions advance the fundamentals of modeling and report on applications of modeling in areas such as cyber-physical systems, embedded systems, socio-technical systems, cloud computing, big data, machine learning, security, open source, and sustainability.
Important Dates
Abstract submission: April 7, 2023
Paper submission: April 14, 2023
Author response period: June 5–7, 2023
Author notification: June 26, 2023
Camera-ready due: July 10, 2023
Submission deadlines are hard; there will be no extensions.
Topics of Interest
MODELS 2023 solicits submissions on a variety of topics related to modeling for software and systems engineering including, but not limited to:
- Fundamentals of model-based engineering, including the definition of syntax and semantics of modeling languages and model transformation languages.
- New paradigms, formalisms, applications, approaches, frameworks, or processes for model-based engineering such as low-code/no-code development, digital twins, etc.
- Definition, use, and analysis of model-based generative and re-engineering approaches.
- Model-based monitoring, analysis, and adaptation heading towards intelligent systems.
- Development of model-based systems engineering approaches and modeling-in-the-large, including interdisciplinary engineering and coordination.
- Applications of AI to model-related engineering problems, e.g., search-based and machine-learning approaches.
- Model-based engineering foundations for AI-based systems.
- Human and organizational factors in model-based engineering.
- Tools, meta-tools, and language workbenches for model-based engineering, including model management and scalable model repositories.
- Hybrid multi-modeling approaches, i.e., integration of various modeling languages and their tools.
- Evaluation and comparison of modeling languages, techniques, and tools.
- Quality assurance (analysis, testing, verification) for functional and non-functional properties of models and model transformations.
- Collaborative modeling to address team management issues, e.g., browser-based and cloud-enabled collaboration.
- Evolution of modeling languages and related standards.
- Modeling education, e.g., delivery methods and curriculum design.
- Modeling in software engineering, e.g., applications of models to address common software engineering challenges.
- Modeling for specific challenges such as collaboration, scalability, security, interoperability, adaptability, flexibility, maintainability, dependability, reuse, energy efficiency, sustainability, and uncertainty.
- Modeling with, and for, novel systems and paradigms in fields such as security, cyber-physical systems (CPSs), the Internet of Things, cloud computing, DevOps, blockchain technology, data analytics, data science, machine learning, Big Data, systems engineering, socio-technical systems, critical infrastructures and services, robotics, mobile applications, conversational agents, and open-source software.
- Empirical studies on the application of model-based engineering in areas such as smart manufacturing, smart cities, smart enterprises, smart mobility, smart society, etc.
As in previous years, MODELS 2023 offers two tracks for technical papers: the Foundations Track and the Practice Track. A detailed description of these tracks can be found on the Foundations Track and Practice Track pages respectively.
Wed 4 Oct (displayed time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna)
10:30 - 12:00
- 10:30 (22 min) | Technical Track | OCL Rebuilt, From the Ground Up | Friedrich Steimann (Fernuniversität in Hagen), Robert Clarisó (Universitat Oberta de Catalunya), Martin Gogolla (University of Bremen)
- 10:52 (22 min) | Technical Track | Applicability of Model Checking for Verifying Spacecraft Operational Designs | Philipp Chrszon, Paulina Maurer, George Saleip, Sascha Müller, Philipp M. Fischer, Andreas Gerndt (German Aerospace Center (DLR)), Michael Felderer (German Aerospace Center (DLR) & University of Cologne)
- 11:15 (22 min) | Technical Track | An Experimental Evaluation of Conformance Testing Techniques in Active Automata Learning
- 11:37 (22 min) | Technical Track | Mutation Testing for Temporal Alloy Models
13:30 - 15:00
- 13:30 (22 min) | Technical Track | A Model-driven and Template-based Approach for Requirements Specification | Ikram Darif (École de technologie supérieure (ÉTS)), Cristiano Politowski (Concordia University, Canada), Ghizlane El Boussaidi (École de technologie supérieure), Imen Benzarti, Segla Kpodjedo (École de technologie supérieure)
- 13:52 (22 min) | Technical Track | Rapid-Prototyping and Early Validation of Software Models through Uniform Integration of Hardware
- 14:15 (22 min) | Journal-first | Real-time collaborative multi-level modeling by conflict-free replicated data types
- 14:37 (22 min) | Journal-first | Multi-Dimensional Multi-Level Modeling | Thomas Kuehne (Victoria University of Wellington)
13:30 - 15:00 (parallel session)
- 13:30 (22 min) | Technical Track | Model-Driven Prompt Engineering | Robert Clarisó (Universitat Oberta de Catalunya), Jordi Cabot (Luxembourg Institute of Science and Technology)
- 13:52 (22 min) | Technical Track | Leveraging modeling concepts and techniques to address challenges in network management | Nafiseh Kahani, Mojtaba Bagherzadeh, Reza Ahmadi, Juergen Dingel (Queen's University, Kingston, Ontario)
- 14:15 (22 min) | Technical Track | Timing-Aware Software-in-the-Loop Simulation of Automotive Applications with FMI 3.0 | Srivathsan Ravi, Laura Beermann, Oliver Kotte, Paolo Pazzaglia, Mythreya Vinnakota, Dirk Ziegenbein (Robert Bosch GmbH), Arne Hamann
- 14:37 (22 min) | Journal-first | Reference architectures modelling and compliance checking | Alessio Bucaioni (Mälardalen University), Amleto Di Salle (European University of Rome), Ludovico Iovino (Gran Sasso Science Institute, L'Aquila, Italy), Ivano Malavolta (Vrije Universiteit Amsterdam), Patrizio Pelliccione (Gran Sasso Science Institute, L'Aquila, Italy)
15:30 - 17:00
- 15:30 (22 min) | Technical Track | Uncertainty-aware consistency checking in industrial settings
- 15:52 (22 min) | Technical Track | Automatic Security-Flaw Detection - Replication and Comparison
- 16:15 (22 min) | Technical Track | An extended model-based characterization of fine-grained access control for SQL queries
- 16:37 (22 min) | Journal-first | A generic framework for representing and analyzing model concurrency | Steffen Zschaler (King's College London), Erwan Bousse (Nantes Université), Julien DeAntoni, Benoit Combemale (University of Rennes, Inria, CNRS, IRISA)
Thu 5 Oct (displayed time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna)
10:30 - 12:00
- 10:30 (22 min) | Journal-first | Advanced testing and debugging support for reactive executable DSLs | Faezeh Khorram (Huawei Technologies), Erwan Bousse (Nantes Université), Jean-Marie Mottu (IMT Atlantique; Nantes Université; École Centrale Nantes), Gerson Sunyé (IMT Atlantique; Nantes Université; École Centrale Nantes)
- 10:52 (22 min) | Journal-first | Flexmi: a generic and modular textual syntax for domain-specific modelling
- 11:15 (22 min) | Journal-first | SimIMA: a virtual Simulink intelligent modeling assistant - Simulink intelligent modeling assistance through machine learning and model clones
- 11:37 (22 min) | Journal-first | Understanding the need for assistance in software modeling: interviews with experts
10:30 - 12:00 (parallel session)
- 10:30 (22 min) | Tools and Demonstrations | Introducing bigUML: A Flexible Open-Source GLSP-based Web Modeling Tool for UML
- 10:52 (25 min) | Tools and Demonstrations | Assembly Line: a tool for collaborative modeling of ontologies in public administration
- 11:17 (18 min) | Tools and Demonstrations | Engineering Low-code Modelling Environments with Dandelion | Francisco Martínez-Lasaca (Universidad Autónoma de Madrid), Pablo Díez, Esther Guerra (Universidad Autónoma de Madrid), Juan de Lara (Autonomous University of Madrid)
- 11:35 (22 min) | Tools and Demonstrations | UML Miner: a tool for mining UML diagrams | Pasquale Ardimento (Università degli Studi di Bari), Lerina Aversano, Mario Luca Bernardi (University of Sannio), Vito Alessandro Carella, Marta Cimitile, Michele Scalera
10:30 - 12:00 (parallel session)
- 10:30 (22 min) | Technical Track | Automated Grading of Use Cases | Omar Alam (Trent University), Mohsen Hosseinibaghdadabadi, Nicolas Almerge, Jörg Kienzle (McGill University, Canada)
- 10:52 (22 min) | Technical Track | Integrating Testing into the Alloy Model Development Workflow | Allison Sullivan (The University of Texas at Arlington)
- 11:15 (22 min) | Technical Track | On Developing and Operating GLSP-based Web Modeling Tools: Lessons Learned from bigUML
- 11:37 (23 min) | Technical Track | Lessons Learned Building Tools for Workflow+ | Nicholas Annable, Richard Paige (McMaster University), Mark Lawford (McMaster University), Thomas Chiang, Alan Wassyng (McMaster University, Canada)
13:30 - 15:00
- 13:30 (22 min) | Tools and Demonstrations | Atlas: A Toolset for Efficient Model-Driven Data Exchange in Data Spaces
- 13:52 (22 min) | Tools and Demonstrations | PyDaQu: Python Data Quality Code Generation based on DAT
- 14:15 (22 min) | Tools and Demonstrations | ScoutSL: An Open-source Simulink Search Engine | Sohil Lal Shrestha (The University of Texas at Arlington), Alexander Boll (University of Bern), Timo Kehrer (University of Bern), Christoph Csallner (University of Texas at Arlington)
- 14:37 (22 min) | Tools and Demonstrations | Demonstration of the DPMF Tool in Support of Data Protection by Design | Laurens Sion (imec-DistriNet, KU Leuven), Dimitri Van Landuyt (KU Leuven, Belgium), Pierre Dewitte, Peggy Valcke, Wouter Joosen (imec-DistriNet, KU Leuven)
15:30 - 17:00
- 15:30 (22 min) | Technical Track | Incremental Model Transformations with Triple Graph Grammars for Multi-version Models | Matthias Barkowsky (Hasso Plattner Institute, University of Potsdam, Germany), Holger Giese (Hasso Plattner Institute, University of Potsdam)
- 15:52 (22 min) | Technical Track | Variability-aware Neo4j for Analyzing a Graphical Model of a Software Product Line
- 16:15 (22 min) | Tools and Demonstrations | How MetaEdit+ Supports Co-Evolution of Modeling Languages, Tools and Models
- 16:37 (22 min) | Technical Track | Experience in Specializing a Generic Realization Language for SPL Engineering at Airbus | Damien Foures, Mathieu Acher (Univ. Rennes 1, Inria, IRISA, Institut Universitaire de France (IUF)), Olivier Barais (University of Rennes, France / Inria, France / CNRS, France / IRISA, France), Benoit Combemale (University of Rennes, Inria, CNRS, IRISA), Jean-Marc Jézéquel (Univ Rennes - IRISA), Jörg Kienzle (McGill University, Canada)
15:30 - 17:00 (parallel session)
- 15:30 (22 min) | Technical Track | Word Embeddings for Model-Driven Engineering | José Antonio Hernández López (Linköping University), Carlos Durá, Jesús Sánchez Cuadrado (Universidad de Murcia)
- 15:52 (22 min) | Technical Track | Automated Domain Modeling with Large Language Models: A Comparative Study | Kua Chen, Yujing Yang, Boqi Chen (McGill University), José Antonio Hernández López (Linköping University), Gunter Mussbacher (McGill University), Daniel Varro (Linköping University / McGill University)
- 16:15 (22 min) | Technical Track | SkeMo: Sketch Modeling for Real-Time Model Component Generation
- 16:37 (22 min) | Technical Track | Toward a Symbiotic Approach Leveraging Generative AI for Model-Driven Engineering | Vinay Kulkarni (Tata Consultancy Services Research), Sreedhar Reddy, Souvik Barat (Tata Consultancy Services Research), Jaya Dutta
Fri 6 Oct (displayed time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna)
10:30 - 12:00
- 10:30 (22 min) | Tools and Demonstrations | Gotten: A Model-driven Solution to Engineer Domain-specific Metamorphic Testing Environments | Pablo Gómez-Abajo (Universidad Autónoma de Madrid), Pablo C Canizares (Autonomous University of Madrid, Spain), Alberto Núñez (University Complutense of Madrid, Spain), Esther Guerra (Universidad Autónoma de Madrid), Juan de Lara (Autonomous University of Madrid)
- 10:52 (22 min) | Tools and Demonstrations | MMT: Mutation Testing of Java Bytecode with Model Transformation | Christoph Bockisch (Philipps-Universität Marburg), Daniel Neufeld, Gabriele Taentzer (Philipps-Universität Marburg)
- 11:15 (22 min) | Journal-first | Business process modeling language selection for research modelers | Siamak Farshidi (Utrecht University), Izaak Beer Kwantes, Slinger Jansen (Utrecht University, Netherlands)
- 11:37 (22 min) | Journal-first | Scientific Workflow Execution in the Cloud using a Dynamic Runtime Model
Accepted Papers
Foundations Track
About
We invite authors to submit high-quality papers describing significant, original, and unpublished results in the following categories:
1. Technical Papers
Technical papers should report on innovative research in modeling or model-driven engineering activities. They should describe a novel contribution to the field and carefully demonstrate the novelty by referencing relevant related literature.
Evaluation Criteria:
Technical papers will be evaluated based on originality, soundness, relevance, significance, strength of validation, quality of presentation, and quality of the related-work discussion. Submissions must clearly and explicitly describe what is novel about their contribution in comparison to prior work. Results must be validated by formal proofs, rigorous demonstrations (e.g., rigorous case studies or simulations), or empirical evaluations (e.g., controlled experiments or surveys). Authors are strongly encouraged to make the artifacts used for the evaluation publicly available, e.g., via a GitHub repository or an alternative that is expected to provide long-term availability. The artifact evaluation process is described below.
2. New Ideas and Vision Papers
New ideas and vision papers describe original, non-conventional research positions in modeling or model-driven engineering and/or approaches that deviate from standard practice. They describe well-defined, revolutionary research ideas that are at an early stage of investigation. They might provide evidence that common wisdom should be challenged, present unifying theories about existing modeling research that can provide new insights or lead to the development of new technologies or approaches, or apply modeling technology to unprecedented application areas.
Evaluation Criteria
New ideas and vision papers will be assessed primarily on their degree of originality and potential for advancing innovation in the field. Submissions must therefore clearly describe the shortcomings of the state of the art and the relevance, correctness, and impact of the idea or vision. New ideas and visions need not be fully worked out, and a detailed roadmap need not be provided. Authors are strongly encouraged to make any artifacts publicly available, e.g., via a GitHub repository or an alternative that is expected to provide long-term availability. The artifact evaluation process is described below.
Artifact Evaluation
Authors of accepted papers will be invited to submit their accompanying artifacts (e.g., software, datasets, and proofs) to the Artifact Evaluation track to be evaluated by the Artifact Evaluation Committee. Participation in the Artifact Evaluation process is optional and does not affect paper acceptance. Submissions that successfully pass the Artifact Evaluation process will be awarded a seal of approval that will be attached to the papers.
Best Papers
Authors of selected conference papers will be invited to submit revised and extended versions for publication in the International Journal on Software and Systems Modeling (SoSyM). MODELS 2023 will furthermore award the very best submissions with “best paper” awards by ACM and Springer.
Submission process
The submission process for the MODELS 2023 Foundations Track follows a double-anonymous review process in which authors will not be identified to reviewers and reviewers will not be identified to authors. Thus, no submission may reveal the identity of its authors and authors must make every effort to comply with the double-anonymous review process. Please consult the submission information section below to prepare your manuscript for the double-anonymous process. Papers must be submitted electronically through the MODELS 2023 EasyChair web page.
- Technical papers must not exceed 10 pages for the main text, including all figures, tables, appendices, etc. Two additional pages containing only references are permitted. Note that the page limit applies to the final, non-anonymized version; a submitted version therefore must not exhaust the page limit unless it reserves blank space for the author information that was removed.
- New ideas and vision papers must not exceed 6 pages for the main text, including all figures, tables, appendices, etc. Two additional pages containing only references are permitted. Note that the page limit applies to the final, non-anonymized version; a submitted version therefore must not exhaust the page limit unless it reserves blank space for the author information that was removed.
- All submissions must be in PDF format. The page limit is strict; it will not be possible to purchase additional pages at any stage of the process.
- The word limit for abstracts is 250 words.
A double-anonymous review process will be used for the Foundations Track. Therefore, no submission may reveal the identity of the authors. Authors must make every effort to comply with the double-anonymous review process. In particular:
- Authors’ names must not be mentioned in the submission.
- All references to the authors' own previous work should be in the third person.
- While authors have the right to upload preprints to arXiv or similar sites, they should not indicate that the manuscript was submitted to MODELS 2023.
- If data is made available to the program committee (by uploading supplemental material or a link to a repository), this data must also not reveal the identity of the authors.
Submissions must conform to the IEEE formatting instructions:
- LaTeX users need to follow the IEEE LaTeX instructions and use the 8.5 x 11 2-column LaTeX Template; Overleaf users need to use the IEEE Conference Template. Note the information on how to use the LaTeX Bibliography Files.
- Word users need to use the 8.5 x 11 2-column Word Template, and choose Times New Roman for the text, author information, and section headings, and Helvetica for the paper title.
- By submitting papers to the MODELS Foundations Track, authors confirm that they are aware of and agree to the ACM Policy and Procedures on Plagiarism and the IEEE Plagiarism FAQ. In particular, papers submitted to MODELS 2023 must not have been published elsewhere and must not be under review or submitted for review elsewhere while under consideration for MODELS 2023.
- Please note the IEEE Authors Rights and Responsibilities.
- Finally, IEEE requires the use of ORCIDs. LaTeX users should use the “orcidlink” package, “\hypersetup{pdfborder={0 0 0}}”, and “\orcidlink{XXXX-XXXX-XXXX-XXXX}” after each author name; a minimal sketch of this setup is shown below this list.
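The following is a minimal, unofficial sketch of how these commands fit together in an IEEEtran conference preamble; the author name, affiliation, e-mail address, and ORCID iD are placeholders, and the actual author block must follow the IEEE template and your own data:

    \documentclass[conference]{IEEEtran}
    \usepackage{orcidlink}            % provides \orcidlink{...} and loads hyperref
    \hypersetup{pdfborder={0 0 0}}    % suppress visible link borders
    \title{Paper Title}
    % Placeholder author data for illustration only:
    \author{\IEEEauthorblockN{Jane Doe\,\orcidlink{XXXX-XXXX-XXXX-XXXX}}
            \IEEEauthorblockA{Example University \\ jane.doe@example.org}}
    \begin{document}
    \maketitle
    \end{document}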
Please contact the Program Chairs if you have any questions about the submission process. Submissions that do not adhere to the stated page limits or violate the formatting guidelines will be desk-rejected without review. Accepted papers will be published in the conference proceedings published by IEEE.
Author Response Period
MODELS 2023 will offer an author response period for submissions that could benefit from improvements, i.e., submissions that have reached a sufficient level of support and may potentially be accepted. In this period, authors may optionally consult the reviews and answer specific questions from the program committee that will inform the subsequent decision-making process.
Practice Track
About
The goal of the Practice Track is to bridge the gap between foundational research in Model-Based Engineering (MBE) and the needs of practice. We invite authors to submit original contributions that report on the application of MBE solutions in industry, the public sector, or open-source environments. Examples include:
- Demonstrations of scalable and cost-effective methodologies and tools.
- Case studies or field reports offering valuable insights.
- Comparisons of competing approaches in real-world scenarios.
Submissions need to communicate the context of the application and the practical importance of the findings. While the application itself need not be novel, any reported lessons learned or insights gained must be original.
Evaluation Criteria
A paper in the Practice Track will be evaluated primarily on the potential impact of its findings. Specifically:
- The paper must describe the context of the MBE application and the problem it addresses or solves.
- The paper should include a concise explanation of the approaches, techniques, methodologies, and tools used.
- The paper should report on the efficacy of the application, ideally in comparison to alternatives, and/or what new lessons have been learned or insights have been gained.
- Studies that report negative results must include a thorough discussion of the possible causes of the failure and, ideally, provide a perspective on how to address them.
Authors are encouraged to make artifacts publicly available, e.g., via a GitHub repository or an alternative that is expected to provide long-term availability. The artifact evaluation process is described below.
Artifact Evaluation
Authors of accepted papers will be invited to submit their accompanying artifacts (e.g., software and datasets) to the Artifact Evaluation track to be evaluated by the Artifact Evaluation Committee. Participation in the Artifact Evaluation process is optional and does not affect paper acceptance. Submissions that successfully pass the Artifact Evaluation process will be awarded a seal of approval that will be attached to the papers.
Best Papers
Authors of selected conference papers will be invited to submit revised and extended versions for publication in the International Journal on Software and Systems Modeling (SoSyM). MODELS 2023 may furthermore recognize the very best Practice Track submissions with a “best paper” award.
Submission process
The submission process for the MODELS 2023 Practice Track follows a single-anonymous review process: authors are known to the reviewers, and author names do not need to be removed from the paper. Please consult the submission information section below to prepare your manuscript.
Papers must be submitted electronically through the MODELS 2023 EasyChair web page.
- Practice papers must not exceed 10 pages for the main text, including all figures, tables, appendices, etc. Two more pages containing only references are permitted.
- All submissions must be in PDF format. The page limit is strict; it will not be possible to purchase additional pages at any stage of the process.
- The word limit for abstracts is 250 words.
Submissions must conform to the IEEE formatting instructions:
- LaTeX users need to follow the IEEE LaTeX instructions and use the 8.5 x 11 2-column LaTeX Template; Overleaf users need to use the IEEE Conference Template. Note the information on how to use the LaTeX Bibliography Files.
- Word users need to use the 8.5 x 11 2-column Word Template, and choose Times New Roman for the text, author information, and section headings, and Helvetica for the paper title.
- By submitting papers to the MODELS Practice Track, authors confirm that they are aware of and agree to the ACM Policy and Procedures on Plagiarism and the IEEE Plagiarism FAQ. In particular, papers submitted to MODELS 2023 must not have been published elsewhere and must not be under review or submitted for review elsewhere while under consideration for MODELS 2023.
- Please note the IEEE Authors Rights and Responsibilities.
- Finally, IEEE requires the use of ORCIDs. LaTeX users should use the “orcidlink” package, “\hypersetup{pdfborder={0 0 0}}”, and “\orcidlink{XXXX-XXXX-XXXX-XXXX}” after each author name (see the sketch under the Foundations Track submission process above).
Please contact the Program Chairs if you have any questions about the submission process. Submissions that do not adhere to the stated page limits or violate the formatting guidelines will be desk-rejected without review. Accepted papers will be published in the conference proceedings published by IEEE.
Author Response Period
MODELS 2023 will offer an author response period for submissions that could benefit from improvements, i.e., submissions that have reached a sufficient level of support and may potentially be accepted. In this period, authors may optionally consult the reviews and answer specific questions from the program committee that will inform the subsequent decision-making process.