MODELS 2024 Technical Track
About
MODELS is the premier conference series for model-based software and systems engineering. Since 1998, MODELS has covered all aspects of modeling, from languages and methods to tools and applications. MODELS participants come from a wide variety of backgrounds, including researchers, academics, engineers, and industry professionals.
MODELS 2024 is a forum for participants to share the latest research and practical experiences around modeling, modeling languages, and model-based software and systems engineering. Respective contributions advance the fundamentals of modeling and report applications of modeling in areas such as cyber-physical systems, embedded systems, socio-technical systems, cloud computing, big data, machine learning, security, open source, and sustainability.
Important Dates
Abstract submission: March 21, 2024
Paper submission: March 28, 2024
Author response period: May 27–29, 2024
Author notification: June 17, 2024
Camera-ready due: July 31, 2024
Submission deadlines are hard, i.e., there will be no deadline extensions.
Topics of Interest
MODELS 2024 solicits submissions on a variety of topics related to modeling for software and systems engineering including, but not limited to:
- Fundamentals of model-based engineering, including the definition of syntax and semantics of modeling languages and model transformation languages.
- New paradigms, formalisms, applications, approaches, frameworks, or processes for model-based engineering such as low-code/no-code development, digital twins, etc.
- Definition, use, and analysis of model-based generative and re-engineering approaches.
- Model-based monitoring, analysis, and adaptation heading towards intelligent systems.
- Development of model-based systems engineering approaches and modeling-in-the-large, including interdisciplinary engineering and coordination.
- Applications of AI to model-related engineering problems, e.g., approaches based on search, machine learning, and large language models (AI for modeling).
- Model-based engineering foundations for AI-based systems (modeling for AI).
- Human and organizational factors in model-based engineering.
- Tools, meta-tools, and language workbenches for model-based engineering, including model management and scalable model repositories.
- Hybrid multi-modeling approaches, i.e., integration of various modeling languages and their tools.
- Evaluation and comparison of modeling languages, techniques, and tools.
- Quality assurance (analysis, testing, verification, fidelity assessment) for functional and non-functional properties of models and model transformations.
- Collaborative modeling to address team management issues, e.g., browser-based and cloud-enabled collaboration.
- Evolution of modeling languages and related standards.
- Modeling education, e.g., delivery methods and curriculum design.
- Modeling in software engineering, e.g., applications of models to address common software engineering challenges.
- Modeling for specific challenges such as collaboration, scalability, security, interoperability, adaptability, flexibility, maintainability, dependability, reuse, energy efficiency, sustainability, and uncertainty.
- Modeling with, and for, novel systems and paradigms in fields such as security, cyber-physical systems (CPSs), the Internet of Things, cloud computing, DevOps, blockchain technology, data analytics, data science, machine learning, Big Data, systems engineering, socio-technical systems, critical infrastructures and services, robotics, mobile applications, conversational agents, and open-source software.
- Empirical studies on the application of model-based engineering in areas such as smart manufacturing, smart cities, smart enterprises, smart mobility, smart society, etc.
As in previous years, MODELS 2024 offers two tracks for technical papers: the Foundations Track and the Practice Track. A detailed description of these tracks can be found on the Foundations Track and Practice Track pages, respectively.
Wed 25 Sep (displayed time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna)
- 10:15 - 10:45: Coffee break (Catering)
- 12:30 - 14:00: Lunch (Catering)
- 15:45 - 16:15: Coffee break (Catering)
Thu 26 Sep (displayed time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna)
- 10:15 - 10:45: Coffee break (Catering)
- 10:45 - 12:30: Model Management (Technical Track / Tools and Demonstrations) at HS 7. Chair(s): Eugene Syriani (Université de Montréal)
  - 10:45 (18m) Talk: EditQL: A Textual Query Language for Evolving Models (FT, Technical Track). Jakob Pietron (Ulm University), Benedikt Jutz (Karlsruhe Institute of Technology), Alexander Raschke (Ulm University), Matthias Tichy (Ulm University)
  - 11:06 (18m) Talk: 10 years of Model Federation with Openflexo: Challenges and Lessons Learned (PT, Technical Track). Jean-Christophe Bach (IMT Atlantique, Lab-STICC UMR 6285), Antoine Beugnard, Joel Champeau, Fabien Dagnat (IMT Atlantique, Lab-STICC UMR 6285), Sylvain Guérin (IMT Atlantique, Lab-STICC UMR 6285), Salvador Martínez (IMT Atlantique)
  - 11:27 (18m) Talk: Give me some REST: A Controlled Experiment to Study Effects and Perception of Model-Driven Engineering with a Domain-Specific Language (PT, Technical Track). Maximilian Schiedermeier (Université du Québec à Montréal), Jörg Kienzle (ITIS Software, University of Malaga), Bettina Kemme (McGill University)
  - 11:48 (18m) Talk: Enhancing Model Management with Automated REST API Generation (Tools and Demonstrations). Adiel Tuyishime (Gran Sasso Science Institute), Francesco Basciani (Gran Sasso Science Institute), Javier Luis Cánovas Izquierdo (IN3 - UOC), Ludovico Iovino (Gran Sasso Science Institute)
  - 12:09 (18m) Talk: Keeping clients' models up-to-date with Edelta (Tools and Demonstrations). Lorenzo Bettini (Dipartimento di Statistica, Informatica, Applicazioni ‘Giuseppe Parenti’, DISIA), Davide Di Ruscio (University of L'Aquila), Amleto Di Salle (Gran Sasso Science Institute), Ludovico Iovino (Gran Sasso Science Institute), Alfonso Pierantonio
- 12:30 - 14:00: Lunch (Catering)
- 14:00 - 15:00: Panel: AI for Modeling and Modeling for AI. Panelists: Lola Burgueño (University of Malaga), Jordi Cabot (Luxembourg Institute of Science and Technology), Davide Di Ruscio (University of L'Aquila), Daniel Varro (Linköping University / McGill University), Mehrdad Sabetzadeh (University of Ottawa)
- 15:00 - 15:15: MODELS 2025 Announcement. Marouane Kessentini (Grand Valley State University)
- 15:15 - 15:45: Coffee break (Catering)
- 18:30 - 23:00
Fri 27 Sep (displayed time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna)
- 10:15 - 10:45: Coffee break (Catering)
- 10:45 - 12:30: Modeling Languages and Tools (Tools and Demonstrations) at HS 7. Chair(s): Steffen Zschaler (King's College London)
  - 10:45 (18m) Talk: Modelling Tool Extension for Vulnerability Management (Tools and Demonstrations). Avi Shaked (University of Oxford), Nan Messe (IRIT - University of Toulouse), Tom Melham (University of Oxford)
  - 11:11 (18m) Talk: SCCD Debugger: a Debugger for Statecharts and Class Diagrams (Tools and Demonstrations). Francisco Simões (NOVA LINCS, Universidade Nova de Lisboa), Miguel Goulao (NOVA LINCS, FCT/UNL), Vasco Amaral (NOVA LINCS, NOVA School of Science and Technology), Joeri Exelmans (University of Antwerp), Hans Vangheluwe (University of Antwerp and Flanders Make)
  - 11:37 (18m) Talk: M2AR: A Web-based Modeling Environment for the Augmented Reality Workflow Modeling Language (Tools and Demonstrations)
  - 12:03 (18m) Talk: Cross-IDE remote debugging of model management programs through the Debug Adapter Protocol (Tools and Demonstrations)
- 12:30 - 13:30: Lunch (Catering)
Foundations Track
About
We invite authors to submit high-quality papers describing significant, original, and unpublished results in the following categories:
1. Technical Papers
Technical papers should report on innovative research in modeling or model-driven engineering activities. They should describe a novel contribution to the field and carefully demonstrate the novelty by referencing relevant related literature.
Evaluation Criteria
Technical papers will be evaluated based on originality, soundness, relevance, significance, strength of validation, quality of presentation, and quality of related work discussions. Submissions must clearly and explicitly describe what is novel about their contribution in comparison to prior work. Results must be validated by formal proofs, rigorous demonstrations (e.g., rigorous case studies or simulations), or empirical evaluations (e.g., controlled experiments or surveys). Authors are strongly encouraged to make the artifacts used for the evaluation publicly available, e.g., via a GitHub repository or an alternative that is expected to provide long-term availability. A respective artifact evaluation process is described below.
2. New Ideas and Vision Papers
New ideas and vision papers describe original, non-conventional research positions in modeling or model-driven engineering and/or approaches that deviate from standard practice. They describe well-defined revolutionary research ideas that are at an early stage of investigation. They might provide evidence that common wisdom should be challenged, present unifying theories about existing modeling research that can provide new insights or lead to the development of new technologies or approaches, or apply modeling technology to unprecedented application areas.
Evaluation Criteria
New ideas and vision papers are either short or long papers. Both will be assessed primarily on their degree of originality and potential for advancing innovation in the field. As such, new ideas and vision papers are expected to follow a specific format and to provide a compelling, revolutionary argument. Note that this category is not intended for foundations or practice papers without sufficient evaluation; such papers will not be accepted. Submissions must clearly describe the shortcomings of the state of the art and the relevance, correctness, and impact of the idea/vision. New ideas and vision papers need not be fully worked out, and a detailed roadmap need not be provided. The use of worked-out examples to support new ideas is strongly encouraged. Long papers must also supply some degree of validation; however, we accept less rigorous methods of validation such as compelling arguments, exploratory implementations, and substantial examples. Authors are also strongly encouraged to make any artifacts publicly available, e.g., via a GitHub repository or an alternative that is expected to provide long-term availability. A respective artifact evaluation process is described below.
Artifact Evaluation
Authors of accepted papers will be invited to submit their accompanying artifacts (e.g., software, datasets, and proofs) to the Artifact Evaluation track to be evaluated by the Artifact Evaluation Committee. Participation in the Artifact Evaluation process is optional and does not affect paper acceptance. Submissions that successfully pass the Artifact Evaluation process will be awarded a seal of approval that will be attached to the papers.
Best Papers
Authors of selected conference papers will be invited to submit revised and extended versions for publication in the International Journal on Software and Systems Modeling (SoSyM). MODELS 2024 will furthermore award the very best submissions with “best paper” awards by ACM and Springer.
Submission process
The submission process for the MODELS 2024 Foundations Track follows a double-anonymous review process in which authors will not be identified to reviewers and reviewers will not be identified to authors. Thus, no submission may reveal the identity of its authors and authors must make every effort to comply with the double-anonymous review process. Please consult the submission information section below to prepare your manuscript for the double-anonymous process. Papers must be submitted electronically through the MODELS 2024 EasyChair web page.
- Technical papers must not exceed 10 pages for the main text, including all figures, tables, appendices, etc. Two additional pages containing only references are permitted. Note that the page limit applies to the final, non-anonymous version; a submitted, anonymized version should therefore not exhaust the page limit unless it leaves blank space for the author information that was removed.
- New Ideas and Vision papers must not exceed 6 pages (short papers) or 10 pages (long papers) for the main text, including all figures, tables, appendices, etc. Two additional pages containing only references are permitted. The same note about the page limit of the final, non-anonymous version applies.
- All submissions must be in PDF format. The page limit is strict; it will not be possible to purchase additional pages at any stage of the process.
A double-anonymous review process will be used for the Foundations Track. Therefore, no submission may reveal the identity of the authors. Authors must make every effort to comply with the double-anonymous review process. In particular:
- Authors’ names must not be mentioned in the submission.
- All references to the authors’ own previous work should be in the third person.
- While authors have the right to upload preprints on arXiv or similar sites, they should not indicate that the manuscript was submitted to MODELS 2024.
- If data is made available to the program committee (by uploading supplemental material or a link to a repository), this data must also not reveal the identity of the authors.
- Formatting instructions are available at https://www.acm.org/publications/proceedings-template for both LaTeX and Word users. LaTeX users must use the provided acmart.cls and ACM-Reference-Format.bst without modification, enable the conference format in the preamble of the document (i.e., \documentclass[sigconf,review]{acmart}), and use the ACM reference format for the bibliography (i.e., \bibliographystyle{ACM-Reference-Format}). The review option adds line numbers, allowing referees to refer to specific lines in their comments. A minimal preamble sketch is shown after this list.
- By submitting to the MODELS Foundations Track, authors acknowledge that they are aware of and agree to be bound by the ACM Policy and Procedures on Plagiarism. In particular, papers submitted to MODELS 2024 must not have been published elsewhere and must not be under review or submitted for review elsewhere while under consideration for MODELS 2024.
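For LaTeX users, the following minimal sketch illustrates one way to set up a double-anonymous Foundations Track submission with the settings named above. It is only an example, not a template prescribed by MODELS; file names such as references.bib are illustrative.

```latex
% Minimal sketch of a MODELS 2024 submission skeleton using the ACM acmart class.
% The [review] option adds line numbers for the referees; drop it (and restore the
% real author information) for the camera-ready, non-anonymous version.
\documentclass[sigconf,review]{acmart}

\begin{document}

\title{Your Paper Title}
% Placeholder author for the double-anonymous review version; do not reveal identities.
\author{Anonymous Author(s)}

% In acmart, the abstract must appear before \maketitle.
\begin{abstract}
One-paragraph abstract of the contribution.
\end{abstract}

\maketitle

\section{Introduction}
Body text goes here.

% Use the unmodified ACM reference format for the bibliography.
% "references" refers to an illustrative references.bib file.
\bibliographystyle{ACM-Reference-Format}
\bibliography{references}

\end{document}
```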
Submissions that do not adhere to the stated page limits or violate the formatting guidelines will be desk-rejected without review. Accepted papers will be published in the conference proceedings published by ACM.
By submitting your article to an ACM Publication, you are hereby acknowledging that you and your co-authors are subject to all ACM Publications Policies, including ACM’s new Publications Policy on Research Involving Human Participants and Subjects. Alleged violations of this policy or any ACM Publications Policy will be investigated by ACM and may result in a full retraction of your paper, in addition to other potential penalties, as per ACM Publications Policy.
Please ensure that you and your co-authors obtain an ORCID ID, so you can complete the publishing process for your accepted paper. ACM has been involved in ORCID from the start and we have recently made a commitment to collect ORCID IDs from all of our published authors. The collection process has started and will roll out as a requirement throughout 2022. We are committed to improving author discoverability, ensuring proper attribution, and contributing to ongoing community efforts around name normalization; your ORCID ID will help in these efforts.
Please contact the Program Chairs if you have any questions about the submission process.
Author Response Period
MODELS 2024 will offer an author response period for submissions that have reached a sufficient level of support and may potentially be accepted, i.e., submissions that could benefit from clarifications. During this period, authors may optionally consult the reviews and answer specific questions from the program committee, which will inform the subsequent decision-making process.
Practice Track
About
The goal of the Practice Track is to bridge the gap between foundational research in Model-Based Engineering (MBE) and the needs of practice. We invite authors to submit original contributions that report on the application of MBE solutions in industry, the public sector, or open-source environments. Examples include:
- Demonstrations of scalable and cost-effective methodologies and tools.
- Case studies or field reports offering valuable insights.
- Comparisons of competing approaches in real-world scenarios.
Submissions need to communicate the context of the application and the practical importance of the findings. While the application itself need not be original, any reported lessons learned or insights gained must be.
Evaluation Criteria
A paper in the Practice Track will be evaluated primarily on the potential impact of its findings. Specifically:
- The paper must describe the context of the MBE application and what problem it solves/addresses.
- The paper should include a concise explanation of the approaches, techniques, methodologies, and tools used.
- The paper should report on the efficacy of the application, ideally in comparison to alternatives, and/or what new lessons have been learned or insights have been gained.
- Studies that report negative results must include a thorough discussion of the possible causes of the failure and, ideally, provide a perspective on how to address them.
Authors are encouraged to make artifacts publicly available, e.g., via a GitHub repository or an alternative that is expected to provide long-term availability. A respective artifact evaluation process is described below.
Artifact Evaluation
Authors of accepted papers will be invited to submit their accompanying artifacts (e.g., software and datasets) to the Artifact Evaluation track to be evaluated by the Artifact Evaluation Committee. Participation in the Artifact Evaluation process is optional and does not affect paper acceptance. Submissions that successfully pass the Artifact Evaluation process will be awarded a seal of approval that will be attached to the papers.
Best Papers
Authors of selected conference papers will be invited to submit revised and extended versions for publication in the International Journal on Software and Systems Modeling (SoSyM). MODELS 2024 may furthermore recognize the very best Practice Track submissions with a “best paper” award.
Submission process
The submission process for the MODELS 2024 Practice Track follows a single-anonymous review process in which author names are identified to reviewers and do not need to be removed from the paper. Please consult the submission information section below to prepare your manuscript.
Papers must be submitted electronically through the MODELS 2024 EasyChair web page.
- Practice papers must not exceed 10 pages for the main text, including all figures, tables, appendices, etc. Two more pages containing only references are permitted.
- All submissions must be in PDF format. The page limit is strict; it will not be possible to purchase additional pages at any stage of the process.
Submissions must conform to the ACM formatting instructions:
- Formatting instructions are available at https://www.acm.org/publications/proceedings-template for both LaTeX and Word users. LaTeX users must use the provided acmart.cls and ACM-Reference-Format.bst without modification, enable the conference format in the preamble of the document (i.e., \documentclass[sigconf,review]{acmart}), and use the ACM reference format for the bibliography (i.e., \bibliographystyle{ACM-Reference-Format}). The review option adds line numbers, thereby allowing referees to refer to specific lines in their comments.
- By submitting to the MODELS Practice Track, authors acknowledge that they are aware of and agree to be bound by the ACM Policy and Procedures on Plagiarism. In particular, papers submitted to MODELS 2024 must not have been published elsewhere and must not be under review or submitted for review elsewhere while under consideration for MODELS 2024.
Submissions that do not adhere to the stated page limits or violate the formatting guidelines will be desk-rejected without review. Accepted papers will be published in the conference proceedings published by ACM.
By submitting your article to an ACM Publication, you are hereby acknowledging that you and your co-authors are subject to all ACM Publications Policies, including ACM’s new Publications Policy on Research Involving Human Participants and Subjects. Alleged violations of this policy or any ACM Publications Policy will be investigated by ACM and may result in a full retraction of your paper, in addition to other potential penalties, as per ACM Publications Policy.
Please ensure that you and your co-authors obtain an ORCID ID, so you can complete the publishing process for your accepted paper. ACM has been involved in ORCID from the start and we have recently made a commitment to collect ORCID IDs from all of our published authors. The collection process has started and will roll out as a requirement throughout 2022. We are committed to improving author discoverability, ensuring proper attribution, and contributing to ongoing community efforts around name normalization; your ORCID ID will help in these efforts.
Please contact the Program Chairs if you have any questions about the submission process.
Author Response Period
MODELS 2024 will offer an author response period for submissions that have reached a sufficient level of support and may potentially be accepted, i.e., submissions that could benefit from clarifications. During this period, authors may optionally consult the reviews and answer specific questions from the program committee, which will inform the subsequent decision-making process.