SLE 2026
Thu 2 - Fri 3 July 2026
co-located with STAF 2026

The ACM SIGPLAN International Conference on Software Language Engineering (SLE) is devoted to the principles of software languages: their design, their implementation, and their evolution.

With the ubiquity of computers, software has become the dominating intellectual asset of our time. In turn, this software depends on software languages, namely the languages it is written in, the languages used to describe its environment, and the languages driving its development process. Given that everything depends on software and that software depends on software languages, it seems fair to say that for many years to come, everything will depend on software languages.

Software language engineering (SLE) is the discipline of engineering languages and their tools required for the creation of software. It abstracts from the differences between programming languages, modelling languages, and other software languages, and emphasises the engineering facet of the creation of such languages, that is, the establishment of the scientific methods and practices that enable the best results. While SLE is certainly driven by its metacircular character (software languages are engineered using software languages), SLE is not self-satisfying: its scope extends to the engineering of languages for all and everything.

Like its predecessors, the 19th edition of the SLE conference, SLE 2026, will bring together researchers from different areas united by their common interest in the creation, capture, and tooling of software languages. It overlaps with traditional conferences on the design and implementation of programming languages, model-driven engineering, and compiler construction, and emphasises the fusion of their communities. To foster the latter, SLE traditionally fills a two-day program with a single track, with the only temporal overlap occurring between co-located events.

SLE 2026 will be co-located with STAF 2026 and take place in Rennes, France.

Call for papers


Topics of Interest

SLE covers software language engineering in general, rather than engineering a specific software language. Topics of interest include, but are not limited to:

  • Software Language Design and Implementation
    • Approaches to and methods for language design
    • Static semantics (e.g., design rules, well-formedness constraints)
    • Techniques for specifying behavioral/executable semantics
    • Generative approaches (incl. code synthesis, compilation)
    • Meta-languages, meta-tools, language workbenches
    • AI-assisted language design and optimisation
  • Software Language Quality
    • Verification and formal methods for languages
    • Testing techniques for languages
    • Simulation techniques for languages
    • Model-based testing
    • AI-assisted validation
  • Software Language Integration and Composition
    • Coordination of heterogeneous languages and tools
    • Mappings between languages (incl. transformation languages)
    • Traceability between languages
    • Deployment of languages to different platforms
    • (AI-assisted) Language refactorings
  • Software Language Maintenance
    • Software language reuse
    • Language evolution
    • Language families and variability, language and software product lines
  • Domain-specific approaches for any aspects of SLE (design, implementation, validation, maintenance)
  • Empirical evaluation and experience reports of language engineering tools
    • User studies evaluating usability
    • Performance benchmarks
    • Industrial applications
  • Synergies between Language Engineering and emerging/promising research areas
    • Generative AI in language engineering (e.g., AI-based language modelling, AI-driven code generation tools)
    • Language engineering for AI and ML (e.g., ML compiler testing, code classification, DSLs for AI processes and tasks…)
    • Quantum language engineering (e.g., language design for quantum machines)
    • Language engineering for physical systems (e.g., CPS, IoT, digital twins)
    • Socio-technical systems and language engineering (e.g., language evolution to adapt to social requirements)

Types of Submissions

SLE accepts the following types of papers:

  • Research papers: These are “traditional” papers detailing research contributions to SLE. Papers may range from 6 to 12 pages in length and may optionally include 2 further pages of bibliography/appendices. Papers will be reviewed with an understanding that some results do not need 12 full pages and may be fully described in fewer pages.

  • New ideas/vision papers: These papers may describe new, unconventional software language engineering research positions or approaches that depart from standard practice. They can describe well-defined research ideas that are at an early stage of investigation. They could also provide new evidence to challenge common wisdom, present new unifying theories about existing SLE research that provide novel insight or that can lead to the development of new technologies or approaches, or apply SLE technology to radically new application areas. New ideas/vision papers must not exceed 5 pages and may optionally include 1 further page of bibliography/appendices.

  • SLE Body of Knowledge: The SLE Body of Knowledge (SLEBoK) is a community-wide effort to provide a unique and comprehensive description of the concepts, best practices, tools, and methods developed by the SLE community. In this respect, the SLE conference will accept surveys, essays, open challenges, empirical observations, and case study papers on the SLE topics. These can focus on, but are not limited to, methods, techniques, best practices, and teaching approaches. Papers in this category can have up to 20 pages, including bibliography/appendices.

  • Tool papers: These papers focus on the tooling aspects often forgotten or neglected in research papers. A good tool paper focuses on practical insights that will likely be useful to other implementers or users in the future. Any of the SLE topics of interest are appropriate areas for tool papers. Submissions must not exceed 5 pages and may optionally include 1 further page of bibliography/appendices. They may optionally include an appendix with a demo outline/screenshots and/or a short video/screencast illustrating the tool.

Workshops: Workshops will be organised by STAF. Please inform us and contact STAF 2026 organisers if you would like to organise a workshop of interest to the SLE audience. Information on how to submit workshops can be found on the STAF 2026 Website.


Submission

SLE 2026 has a single submission round for papers, including a mandatory abstract registration.

Authors of accepted research papers will be invited to submit artefacts.


Format

Submissions have to use the ACM SIGPLAN Conference Format “acmart” (https://sigplan.org/Resources/Author/#acmart-format); please make sure that you always use the latest ACM SIGPLAN acmart LaTeX template, and that the document class definition is \documentclass[sigplan,anonymous,review]{acmart}. Do not make any changes to this format!
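For reference, a minimal skeleton using the required class options might look as follows (the title and body are placeholders; consult the acmart documentation for the full set of abstract and metadata commands):

```latex
\documentclass[sigplan,anonymous,review]{acmart}

\begin{document}

\title{Your Paper Title}
% Author information is omitted: the `anonymous' option
% suppresses it for double-blind review.

\maketitle

% Paper content goes here.

\end{document}
```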

Ensure that your submission is legible when printed on a black and white printer. In particular, please check that colours remain distinct and font sizes in figures and tables are legible.

To increase fairness in reviewing, a double-blind review process has become standard across SIGPLAN conferences. Accordingly, SLE will follow the double-blind process. Author names and institutions must be omitted from submitted papers, and references to the authors’ own related work should be in the third person. No other changes are necessary, and authors will not be penalized if reviewers are able to infer their identities in implicit ways.

All submissions must be in PDF format. You can access the submission site from the conference website: https://conf.researchr.org/home/sle-2026


Concurrent Submissions

Papers must describe unpublished work that is not currently submitted for publication elsewhere as described by SIGPLAN’s Republication Policy (https://www.sigplan.org/Resources/Policies/Republication/). Submitters should also be aware of ACM’s Policy and Procedures on Plagiarism (https://www.acm.org/publications/policies/plagiarism-overview). Submissions that violate these policies will be desk-rejected.


Policy on Human Participant and Subject Research

Authors conducting research involving human participants and subjects must ensure that their research complies with their local governing laws and regulations and the ACM’s general principles, as stated in the ACM’s Publications Policy on Research Involving Human Participants and Subjects (https://www.acm.org/publications/policies/research-involving-human-participants-and-subjects). If submissions are found to be violating this policy, they will be rejected.


Reviewing Process

All submitted papers will be reviewed by at least three members of the program committee. Research papers will be evaluated concerning soundness, relevance, novelty, presentation, validation, and replicability. New ideas/vision papers will be evaluated primarily concerning soundness, relevance, novelty, and presentation. Tool papers will be evaluated concerning relevance, presentation, and replicability.

For fairness reasons, all submitted papers must conform to the above instructions. Submissions that violate these instructions may be rejected without review at the discretion of the PC chairs.


Artefact Evaluation

To foster a culture of experimental reproducibility, SLE will use an evaluation process to assess the quality of the artefacts on which papers are based. Authors of accepted research papers are invited to submit artefacts.


Awards

  • Distinguished paper: Award for the most notable paper, as determined by the PC chairs based on the recommendations of the program committee.
  • Distinguished artefact: Award for the artefact most significantly exceeding expectations, as determined by the AEC chairs based on the recommendations of the artefact evaluation committee.
  • Distinguished reviewer: Award for the program committee member who produced the most useful reviews, as assessed by paper authors.
  • Most Influential Paper: Award for the SLE 2016 paper with the greatest impact, as judged by the SLE Steering Committee.

Publication

All accepted papers will be published in the ACM Digital Library.

AUTHORS TAKE NOTE: The official publication date is the date the proceedings are made available in the ACM Digital Library. This date may be up to two weeks prior to the first day of the conference. The official publication date affects the deadline for any patent filings related to published work.

Notes from the ACM:

  • By submitting your article to an ACM Publication, you are hereby acknowledging that you and your co-authors are subject to all ACM Publications Policies, including ACM’s new Publications Policy on Research Involving Human Participants and Subjects. Alleged violations of this policy or any ACM Publications Policy will be investigated by ACM and may result in a full retraction of your paper, in addition to other potential penalties, as per ACM Publications Policy.
  • Please ensure that you and your co-authors obtain an ORCID ID, so you can complete the publishing process for your accepted paper. ACM has been involved in ORCID from the start and we have recently made a commitment to collect ORCID IDs from all of our published authors. We are committed to improve author discoverability, ensure proper attribution and contribute to ongoing community efforts around name normalization; your ORCID ID will help in these efforts.

ACM's new open access publishing model for 2026

Important update on ACM's new open access publishing model for 2026 ACM conferences.

Starting January 1, 2026, ACM will fully transition to Open Access. All ACM publications, including those from ACM-sponsored conferences, will be 100% Open Access. Authors will have two primary options for publishing Open Access articles with ACM: the ACM Open institutional model or by paying Article Processing Charges (APCs). With over 2,600 institutions already part of ACM Open, the majority of ACM-sponsored conference papers will not require APCs from authors or conferences (currently, around 76%).

Authors from institutions not participating in ACM Open will need to pay an APC to publish their papers, unless they qualify for a financial waiver. To find out whether an APC applies to your article, please consult the list of participating institutions in ACM Open and review ACM's Policy on Discretionary Open Access APC Waivers (https://www.acm.org/publications/policies/policy-on-discretionary-open-access-apc-waivers). Keep in mind that waivers are rare and are granted based on specific criteria set by ACM.

Understanding that this change could present financial challenges, ACM has approved a temporary subsidy for 2026 to ease the transition and allow more time for institutions to join ACM Open. The subsidy will offer:

  • $250 APC for ACM/SIG members
  • $350 APC for non-members

This represents a 65% discount, funded directly by ACM. Authors are encouraged to help advocate for their institutions to join ACM Open during this transition period.

This temporary subsidized pricing will apply to all conferences scheduled for 2026.


Organisation

  • General chair: Arnaud Blouin, Univ Rennes, INSA Rennes, Inria, CNRS, IRISA
  • PC co-chair: Jordi Cabot, Luxembourg Institute of Science and Technology
  • PC co-chair: Shigeru Chiba, University of Tokyo, Japan

Contact

For additional information, clarification, or answers to any questions, please get in touch with the program co-chairs (jordi.cabot@list.lu and chiba@g.ecc.u-tokyo.ac.jp).

Keynote

Cédric Brun

CEO of OBEO

We built the languages. That was the easy part.

For over 20 years, the Software Language Engineering community has pushed languages forward. More expressive. More formal. Better tooled. Metamodeling frameworks, transformations, graphical and textual notations, language workbenches. At Obeo, we have contributed to this journey through the Eclipse ecosystem with technologies such as EMF, Acceleo, Sirius, and Capella, deployed in large-scale industrial contexts.

And it worked. Technically.

But when these languages meet real organizations, a different reality emerges. Adoption is hard. Collaboration is harder. Usability becomes the bottleneck. In practice, we have too often confused expressivity with usability.

This keynote reflects on two decades of building and deploying modeling languages in the real world. The conclusion is simple. The problem was never the language itself. It was everything around it: user experience, integration into workflows, and the ability for teams to share and evolve a language over time.

The shift to web-based platforms such as Sirius Web and SysON changes the game. Not just new technology. A new constraint. Languages must be collaborative. Accessible. Alive. No longer reserved for experts.

The future of modeling is not less language. It is more language - better tooled, more collaborative, and closer to the people who need it.

Artifact Evaluation

The SLE’26 review process also evaluates the quality of the artifacts supporting accepted research papers, through the Artifact Evaluation track.

Authors of research and tool papers accepted for SLE 2026 will be invited to submit artifacts. In the context of the SLE community, an artifact refers to any digital object that supports, complements, or is a result of research in the field of software language engineering. This includes, but is not limited to, tools, language grammars, metamodels, codebases, transformation scripts, formal proofs, benchmarks, datasets, statistical analyses, and surveys.

The submitted artifacts will be reviewed by a dedicated Artifact Evaluation Committee. The approved artifacts will then be made first-class bibliographic objects, easy to find and cite. Depending on the quality of the artifact, the artifact might be awarded with different kinds of “badges” that are visible on the final paper.

Submission is optional and additional to your already accepted SLE’26 paper; it cannot negatively affect the paper’s acceptance.

Artifacts provide tangible evidence of results, enable reproducibility, and encourage reuse and extension by the community.

Artifact Review Process

Submitted artifacts will go through a two-phase evaluation.

  1. Kick-the-tires:
    Reviewers check the artifact integrity and look for any possible setup problems that may prevent it from being properly evaluated (e.g., corrupted or missing files, VM won’t start, immediate crashes on the simplest example, etc.). Authors are informed of the outcome and will be given a 5-day period to read and respond to the kick-the-tires reports of their artifacts. During the author response period, interactive discussions between reviewers and authors will be possible through HotCRP.

  2. Artifact assessment:
    Reviewers evaluate the artifacts, checking if they live up to the claims the authors make in the accompanying documentation.

Artifact Preparation Guidelines

At a high level, we are interested in artifacts that:

  • Have no dependencies. Use of Docker images is strongly recommended. Virtual machine images in OVF/OVA format containing the artifact can also be provided.
  • Have a minimal number of setup steps. Ideally, setup should amount to importing the Docker/VM image.
  • Have a short run, so that reviewers can try it first before carrying out the full review (kick-the-tires).
  • Have a push-button evaluation. Ideally, the evaluation can be run through a single script, which performs the computation and generates the relevant figures/experimental data presented in the paper. The evaluation should either display progress messages or state its expected duration. This fully automated approach may be a bit more costly to set up, but you won’t have any copy/pasting issues for your paper, and regenerating data is heavily simplified.
  • Include some documentation on the code and layout of the artifact.
  • Use widely supported open formats for documents, preferably CSV or JSON for data.
  • Document which outputs are associated with which parts of your paper; if possible, specify the table, figure, or sub-section.
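As an illustration of the "push-button evaluation" guideline above, a hypothetical top-level script (here called run.sh; all file and directory names are placeholders, nothing in it is mandated by SLE) could drive the whole evaluation, report progress, and document which output backs which part of the paper:

```shell
#!/usr/bin/env sh
# Hypothetical push-button evaluation script for an artifact.
# File names (results/, table2.csv) are illustrative, not SLE requirements.
set -eu

echo "[1/3] Preparing output directory (expected total runtime: ~1 minute)"
mkdir -p results

echo "[2/3] Running the experiment"
# A real artifact would invoke its tool here (e.g., inside the Docker image);
# this toy stand-in emits deterministic CSV data so the pipeline is testable
# end to end.
printf 'input_size,runtime_ms\n10,42\n100,420\n' > results/table2.csv

echo "[3/3] Done. results/table2.csv corresponds to Table 2 in the paper."
```

Emitting open formats such as CSV directly from the script means reviewers can regenerate and inspect the paper's data without any manual copy/pasting.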

The artifact evaluated by the AEC and linked in the paper must be precisely the same.

The AEC chairs will ensure that DOIs point to the specific version evaluated. To create a DOI, you can use platforms such as Zenodo, FigShare, or OSF, which offer free DOI creation.

The PDF and the artifact should NOT be anonymized at this stage.

Authors are strongly discouraged from:

  • Downloading content over the internet during experiments or tests;
  • Using closed-source software libraries, frameworks, operating systems, and container formats; and
  • Providing experiments or tests that run for multiple days. If the artifact takes several days to run, we ask that you provide us with the full artifact and a reduced input set (in addition to the full set) to only partially reproduce your results in a shorter time. If the artifact requires special hardware, please get in touch with the AEC chairs, let us know of the issue, and provide us with (preferably SSH) access to a self-hosted platform for accessing the artifact.

Artifact Submission Guidelines

Every submission must include the following.
Authors must submit a single artifact for a paper (1-to-1 mapping, paper-to-artifact).

  • A DOI for downloading the artifact.
  • A PDF version of the accepted paper for evaluating the artifact-paper consistency.
  • A Markdown-formatted file providing an overview of the artifact. PLEASE USE THE AUTHORS’ TEMPLATE: https://doi.org/10.5281/zenodo.14975264

Artifact submissions will be handled through the HotCRP submission system at the following link: https://sle26ae.hotcrp.com

NOTE: The artifact can be associated with a different set of authors (different from the accepted paper).

Quality Criteria

Submitted artifacts will be evaluated by the AEC concerning the following criteria.
Depending on the criteria met, different badges are assigned (we limit ourselves to the ‘Evaluated’ and ‘Available’ badges).

Artifact Evaluated (Badges)

There are two quality levels of the ‘Evaluated’ badge:

  • ‘Evaluated Functional’ assures minimal functionality.
  • ‘Evaluated Reusable’ is awarded to artifacts that exceed minimal functionality and provide reproducible evidence.

Only one of the ‘Artifact Evaluated’ badges will be awarded. The decision will be made by the AE chairs.

Artifact Evaluated - Functional (Badge)


“The artifacts associated with the research are found to be documented, consistent, complete, exercisable, and include appropriate evidence of verification and validation.”

  • Documented: At minimum, an inventory of artifacts is included, and sufficient description provided to enable the artifacts to be exercised.
  • Consistent: The artifacts are relevant to the associated paper, and contribute in some inherent way to the generation of its main results.
  • Complete: To the extent possible, all components relevant to the paper in question are included. (Proprietary artifacts need not be included. If they are required to exercise the package then this should be documented, along with instructions on how to obtain them. Proxies for proprietary data should be included so as to demonstrate the analysis.)
  • Exercisable: Included scripts and/or software used to generate the results in the associated paper can be successfully executed, and included data can be accessed and appropriately manipulated.

Artifact Evaluated - Reusable (Badge)


”The artifacts associated with the paper are of a quality that significantly exceeds minimal functionality. That is, they have all the qualities of the Artifacts Evaluated – Functional level, but, in addition, they are very carefully documented and well-structured to the extent that reuse and repurposing is facilitated. In particular, norms and standards of the research community for artifacts of this type are strictly adhered to.”

Artifact Available (Badge)


“Author-created artifacts relevant to this paper have been placed on a publically accessible archival repository. A DOI or link to this repository along with a unique identifier for the object is provided.”

  • Identification: Using DOIs to identify published objects is standard. It is important to use a DOI that points to the specific version with which the results of the paper can be reproduced (for Zenodo: do not use the “always latest” DOI; for FigShare: use a DOI with a version suffix, e.g., “.v1”).
  • Long-Term Availability: It is necessary that the artifacts are archived in an archive that hosts the artifacts on a long-term basis, such as in digital libraries of the ACM, Zenodo, etc. (version repositories do not fulfill this requirement, as the hosting company could decide at any time to discontinue the service, as done by Google, for example: Google Code).
  • Immutability: It is necessary that the artifact cannot be changed after publication because the reader needs to use the material exactly as the authors did to obtain their result.

Important Dates

  • Artifact submission deadline: 29.04.2026 (AoE)
  • Kick-the-tires author response period starts: 12.05.2026 (AoE)
  • Kick-the-tires author response period ends: 18.05.2026 (AoE)
  • Artifact notification to authors: 08.06.2026 (AoE)

Awards

The Distinguished Artifact award will be presented to the artifact that most significantly exceeds expectations. This recognition is determined by the AEC chairs based on the recommendations of the artifact evaluation committee.

Any Questions?

For further information on the artifact evaluation of SLE 2026, feel free to contact the artifact evaluation chairs.

Best regards, Théo Matricon and Georges Aaron Randrianaina.