SLE 2017
Sun 22 - Fri 27 October 2017 Vancouver, Canada
co-located with SPLASH 2017

Software Language Engineering (SLE) is the application of systematic, disciplined, and measurable approaches to the development, use, deployment, and maintenance of software languages. The term “software language” is used broadly, and includes: general-purpose programming languages; domain-specific languages (e.g. BPMN, Simulink, Modelica); modeling and metamodeling languages (e.g. SysML and UML); data models and ontologies (e.g. XML-based and OWL-based languages and vocabularies).

Dates

Mon 23 Oct

Displayed time zone: Tijuana, Baja California

08:30 - 10:00
Keynote GPCE/SLE (SLE) at Regency A+B
08:30
15m
Day opening
Opening
SLE
Benoit Combemale University of Rennes 1, Marjan Mernik University of Maribor, Bernhard Rumpe RWTH Aachen University, Germany
Media Attached
08:45
75m
Talk
GPCE Keynote: The Landscape of Refactoring Research in the Last Decade, Danny Dig (Keynote)
SLE
Danny Dig School of EECS at Oregon State University
10:30 - 12:00
Parsing (SLE) at Regency B
Chair(s): Ralf Laemmel University of Koblenz-Landau, Germany
10:30
25m
Talk
Type-Safe Modular Parsing (Artifact Evaluation)
SLE
Haoyuan Zhang , Huang Li , Bruno C. d. S. Oliveira University of Hong Kong, China
DOI
10:55
25m
Talk
Incremental Packrat Parsing (Artifact Evaluation)
SLE
Patrick Dubroy Y Combinator Research, USA, Alessandro Warth Y Combinator Research, USA
DOI
11:20
25m
Talk
A Symbol-Based Extension of Parsing Expression Grammars and Context-Sensitive Packrat Parsing
SLE
Kimio Kuramitsu Yokohama National University, Japan
DOI
11:45
15m
Talk
Red Shift: Procedural Shift-Reduce Parsing (Vision Paper)
SLE
Nicolas Laurent Université Catholique de Louvain, Belgium
DOI
13:30 - 15:00
Textual Models (SLE) at Regency B
Chair(s): Anthony Sloane Macquarie University
13:30
23m
Talk
Towards a Taxonomy of Grammar Smells
SLE
Mats Stijlaart Universiteit van Amsterdam, Vadim Zaytsev Raincode Labs, Belgium
DOI
13:53
22m
Talk
Deep Priority Conflicts in the Wild: A Pilot Study (Artifact Evaluation)
SLE
Luis Eduardo de Souza Amorim Delft University of Technology, Netherlands, Michael J. Steindorfer Delft University of Technology, Eelco Visser Delft University of Technology
DOI
14:16
22m
Talk
Virtual Textual Model Composition for Supporting Versioning and Aspect-Orientation (Artifact Evaluation)
SLE
Robert Bill Vienna University of Technology, Patrick Neubauer University of York, UK, Manuel Wimmer TU Wien
DOI
14:38
22m
Talk
Robust Projectional Editing (Artifact Evaluation)
SLE
Friedrich Steimann Fernuniversität, Marcus Frenkel Fernuni Hagen, Markus Voelter itemis
DOI
15:30 - 17:00
DSLs (SLE) at Regency B
Chair(s): Jurgen Vinju Centrum Wiskunde & Informatica / Technische Universiteit Eindhoven
15:30
25m
Talk
Debugging with Domain-Specific Events via Macros (Distinguished Paper, Artifact Evaluation)
SLE
Xiangqi Li University of Utah, Matthew Flatt University of Utah
DOI
15:55
25m
Talk
A Chrestomathy of DSL implementations
SLE
Simon Schauss University of Koblenz-Landau, Ralf Laemmel University of Koblenz-Landau, Germany, Johannes Härtel University of Koblenz-Landau, Germany, Marcel Heinz University of Koblenz-Landau, Germany, Kevin Klein University of Koblenz-Landau, Lukas Härtel University of Koblenz-Landau, Germany, Thorsten Berger Chalmers University of Technology, Sweden / University of Gothenburg, Sweden
DOI
16:20
25m
Talk
A Requirements Engineering Approach for Usability-Driven DSL Development (Artifact Evaluation)
SLE
Ankica Barisic NOVA-LINCS - Universidade Nova de Lisboa, Dominique Blouin LTCI Lab, Telecom ParisTech, Université Paris-Saclay, Vasco Amaral NOVA-LINCS, FCT/UNL, Miguel Goulao NOVA-LINCS, FCT/UNL
DOI
16:45
25m
Talk
Better Call the Crowd. Using Crowdsourcing to Shape the Notation of Domain-Specific Languages
SLE
Marco Brambilla Politecnico di Milano, Jordi Cabot ICREA - UOC, Javier Luis Cánovas Izquierdo IN3 - UOC, Andrea Mauri Politecnico di Milano, Italy
DOI

Tue 24 Oct

Displayed time zone: Tijuana, Baja California

08:30 - 10:00
Keynote GPCE/SLE (SLE) at Regency A+B
08:30
15m
Day opening
Awards
SLE
Marjan Mernik University of Maribor, Bernhard Rumpe RWTH Aachen University, Germany, Laurence Tratt King's College London, Tanja Mayerhofer TU Wien
File Attached
08:45
75m
Talk
SLE Keynote: Engineering meta-languages for specifying software languages (Keynote)
SLE
Peter D. Mosses Swansea University
DOI File Attached
10:30 - 12:00
Grammars (SLE) at Regency B
Chair(s): Bernhard Rumpe RWTH Aachen University, Germany
10:30
25m
Talk
A Formalisation of Parameterised Reference Attribute Grammars (Artifact Evaluation)
SLE
Scott Buckley Macquarie University, Australia, Anthony Sloane Macquarie University
DOI
10:55
25m
Talk
Concurrent Circular Reference Attribute Grammars (Artifact Evaluation)
SLE
Jesper Öqvist Lund University, Görel Hedin Lund University
DOI
11:20
25m
Talk
Ensuring Non-interference of Composable Language Extensions
SLE
Ted Kaminski University of Minnesota, Eric Van Wyk University of Minnesota, USA
DOI
11:45
15m
Talk
A Domain-Specific Controlled English Language for Automated Regulatory Compliance (Industrial Paper, Artifact Evaluation)
SLE
Suman Roychoudhury Tata Consultancy Services Research, Sagar Sunkle Tata Consultancy Services Research, Deepali Kholkar Tata Consultancy Services Research, Vinay Kulkarni Tata Consultancy Services Research
DOI
13:30 - 15:00
Meta-modelling (SLE) at Regency B
Chair(s): Marjan Mernik University of Maribor
13:30
23m
Talk
Concrete Syntax: A Multi-paradigm Modelling Approach
SLE
Yentl Van Tendeloo University of Antwerp, Simon Van Mierlo University of Antwerp, Bart Meyers University of Antwerp, Belgium, Hans Vangheluwe University of Antwerp and McGill University
DOI
13:53
23m
Talk
Structural Model Subtyping with OCL Constraints (Artifact Evaluation)
SLE
Artur Boronat University of Leicester
DOI
14:16
22m
Talk
Comparison of the Expressiveness and Performance of Template-Based Code Generation Tools
SLE
Lechanceux Luhunu University of Montreal, Eugene Syriani University of Montreal
DOI
14:38
22m
Talk
Tool Demonstration: A development environment for the Alf language within the MagicDraw UML tool (Tool Demo)
SLE
Ed Seidewitz nMeta LLC
DOI
15:30 - 17:00
GPL/DSL implementation (SLE) at Regency B
Chair(s): Eric Van Wyk University of Minnesota, USA
15:30
25m
Talk
FlowSpec: Declarative Dataflow Analysis Specification
SLE
Jeff Smits Delft University of Technology, Netherlands, Eelco Visser Delft University of Technology
DOI File Attached
15:55
25m
Talk
Metacasanova: An Optimized Meta-compiler for Domain-Specific Languages
SLE
Francesco Di Giacomo Università Ca' Foscari, Mohamed Abbadi Hogeschool Rotterdam, Agostino Cortesi Università Ca' Foscari Venezia, Pieter Spronck Tilburg University, Giuseppe Maggiore Hogeschool Rotterdam
DOI
16:20
25m
Talk
Robust Programs with Filtered Iterators (Artifact Evaluation, Distinguished Artifact)
SLE
Jiasi Shen Massachusetts Institute of Technology, Martin C. Rinard Massachusetts Institute of Technology
DOI
16:45
25m
Talk
An Introduction to the Software Language Engineering Body of Knowledge
SLE
Vadim Zaytsev Raincode Labs, Belgium
17:10
10m
Talk
Energy Efficiency across Programming Languages: How do Energy, Time, and Memory Relate?
SLE
Rui Pereira HASLab/INESC TEC & Universidade do Minho, Marco Couto HASLab/INESC TEC & Universidade do Minho, Francisco Ribeiro HASLab/INESC TEC & Universidade do Minho, Rui Rua HASLab/INESC TEC & Universidade do Minho, Jácome Cunha NOVA-LINCS - Universidade Nova de Lisboa, João Paulo Fernandes Release/LISP, CISUC, João Saraiva University of Minho, Portugal
DOI Media Attached
18:00 - 20:00
18:00
2h
Dinner
Dinner (registration add-on)
SLE

Accepted Papers

Title
A Chrestomathy of DSL implementations
SLE
DOI
A Domain-Specific Controlled English Language for Automated Regulatory Compliance (Industrial Paper, Artifact Evaluation)
SLE
DOI
A Formalisation of Parameterised Reference Attribute Grammars (Artifact Evaluation)
SLE
DOI
A Requirements Engineering Approach for Usability-Driven DSL Development (Artifact Evaluation)
SLE
DOI
A Symbol-Based Extension of Parsing Expression Grammars and Context-Sensitive Packrat Parsing
SLE
DOI
Better Call the Crowd. Using Crowdsourcing to Shape the Notation of Domain-Specific Languages
SLE
DOI
Comparison of the Expressiveness and Performance of Template-Based Code Generation Tools
SLE
DOI
Concrete Syntax: A Multi-paradigm Modelling Approach
SLE
DOI
Concurrent Circular Reference Attribute Grammars (Artifact Evaluation)
SLE
DOI
Debugging with Domain-Specific Events via Macros (Distinguished Paper, Artifact Evaluation)
SLE
DOI
Deep Priority Conflicts in the Wild: A Pilot Study (Artifact Evaluation)
SLE
DOI
Energy Efficiency across Programming Languages: How do Energy, Time, and Memory Relate?
SLE
DOI Media Attached
Ensuring Non-interference of Composable Language Extensions
SLE
DOI
FlowSpec: Declarative Dataflow Analysis Specification
SLE
DOI File Attached
Incremental Packrat Parsing (Artifact Evaluation)
SLE
DOI
Metacasanova: An Optimized Meta-compiler for Domain-Specific Languages
SLE
DOI
Red Shift: Procedural Shift-Reduce Parsing (Vision Paper)
SLE
DOI
Robust Programs with Filtered Iterators (Artifact Evaluation, Distinguished Artifact)
SLE
DOI
Robust Projectional Editing (Artifact Evaluation)
SLE
DOI
Structural Model Subtyping with OCL Constraints (Artifact Evaluation)
SLE
DOI
Tool Demonstration: A development environment for the Alf language within the MagicDraw UML tool (Tool Demo)
SLE
DOI
Towards a Taxonomy of Grammar Smells
SLE
DOI
Type-Safe Modular Parsing (Artifact Evaluation)
SLE
DOI
Virtual Textual Model Composition for Supporting Versioning and Aspect-Orientation (Artifact Evaluation)
SLE
DOI

Call for Papers

Topics of Interest

SLE aims to be broad-minded and inclusive about relevance and scope. We solicit high-quality contributions in areas ranging from theoretical and conceptual contributions to tools, techniques, and frameworks in the domain of language engineering. Topics relevant to SLE cover generic aspects of software language development rather than aspects of engineering a specific language. In particular, SLE is interested in principled engineering approaches and techniques in the following areas:

  • Language Design and Implementation
    • Approaches and methodologies for language design
    • Static semantics (e.g., design rules, well-formedness constraints)
    • Techniques for behavioral / executable semantics
    • Generative approaches (incl. code synthesis, compilation)
    • Meta-languages, meta-tools, language workbenches
  • Language Validation
    • Verification and formal methods for languages
    • Testing techniques for languages
    • Simulation techniques for languages
  • Language Integration and Composition
    • Coordination of heterogeneous languages and tools
    • Mappings between languages (incl. transformation languages)
    • Traceability between languages
    • Deployment of languages to different platforms
  • Language Maintenance
    • Software language reuse
    • Language evolution
    • Language families and variability
  • Domain-specific approaches for any aspects of SLE (design, implementation, validation, maintenance)
  • Empirical evaluation and experience reports of language engineering tools
    • User studies evaluating usability
    • Performance benchmarks
    • Industrial applications

Types of Submissions

  • Research papers: These should report a substantial research contribution to SLE or successful application of SLE techniques or both. Full paper submissions must not exceed 12 pages including bibliography (in ACM SIGPLAN conference style - acmart).

  • Tool papers: Because of SLE’s interest in tools, we seek papers that present software tools related to the field of SLE. Selection criteria include originality of the tool, its innovative aspects, and relevance to SLE. Any of the SLE topics of interest are appropriate areas for tool demonstrations. Submissions must provide a tool description of 4 pages including bibliography (in ACM SIGPLAN conference style - acmart), and a demonstration outline including screenshots of up to 6 pages. Tool demonstrations must have the keywords “Tool Demo” or “Tool Demonstration” in the title. The 4-page tool description will, if the demonstration is accepted, be published in the proceedings. The 6-page demonstration outline will be used by the program committee only for evaluating the submission.

  • Industrial papers: These should describe real-world application scenarios of SLE in industry, explained in their context with an analysis of the challenges that were overcome and the lessons which the audience can learn from this experience. Industry paper submissions must not exceed 6 pages including bibliography (in ACM SIGPLAN conference style - acmart).

  • New ideas / vision papers: New ideas papers should describe new, non-conventional SLE research approaches that depart from standard practice. They are intended to describe well-defined research ideas that are at an early stage of investigation. Vision papers are intended to present new unifying theories about existing SLE research that can lead to the development of new technologies or approaches. New ideas / vision papers must not exceed 4 pages including bibliography (in ACM SIGPLAN conference style - acmart).

Workshops: Workshops will be organized by SPLASH. Please inform us and contact the SPLASH organizers if you would like to organize a workshop of interest to the SLE audience.

Artifact Evaluation

For the second year, SLE will use an evaluation process to assess the quality of the artifacts on which papers are based, in order to foster a culture of experimental reproducibility. Authors of accepted papers are invited to submit artifacts. For more information, see the Artifact Evaluation page.

Submission

Submissions will be accepted at https://sle17.hotcrp.com.

Publications

All submitted papers will be reviewed by at least three members of the program committee. All accepted papers, including tool papers, will be published in the ACM Digital Library.

AUTHORS TAKE NOTE: The official publication date is the date the proceedings are made available in the ACM Digital Library. This date may be up to two weeks prior to the first day of the conference. The official publication date affects the deadline for any patent filings related to published work.

Journal Special Issue

Selected accepted papers will be invited to a special issue of the Computer Languages, Systems and Structures (COMLAN) journal.

Awards

  • Distinguished paper. Award for the most notable paper, as determined by the PC chairs based on the recommendations of the program committee.

  • Distinguished reviewer. Award for the most distinguished reviewer, as determined by the PC chairs using feedback from the authors.

  • Distinguished artifact. Award for the artifact most significantly exceeding expectations, as determined by the AEC chairs based on the recommendations of the artifact evaluation committee.

Concurrent Submissions

Papers must describe unpublished work that is not currently submitted for publication elsewhere as described by SIGPLAN’s Republication Policy. Submitters should also be aware of ACM’s Policy and Procedures on Plagiarism.

Format

(NOTE: NEW FORMAT REQUIREMENTS FOR SLE 2017)

Submissions should use the ACM SIGPLAN Conference Format acmart, 10 point font, with the font family Times New Roman. All submissions should be in PDF format. If you use LaTeX or Word, please use the ACM SIGPLAN templates provided here; otherwise, follow the author instructions. SLE follows a single-blind review process.

Authors using LaTeX will need to use the acmart LaTeX class (instead of the sigplanconf class used in the past) with the “sigplan” option. Note that submissions should use a 10 point font. Authors formatting their paper in Word may wish to use the provided Word template that supports this font size. Please include page numbers in your submission; setting the preprint option in the LaTeX \documentclass command generates page numbers. Please also ensure that your submission is legible when printed on a black-and-white printer; in particular, check that colors remain distinct and font sizes are legible.

More Information

For fairness reasons, all submitted papers should conform to the above instructions. Submissions that violate these instructions may be rejected without review, at the discretion of the Program Chair.

For additional information, clarification, or answers to questions please contact the Program Chairs.

Engineering meta-languages for specifying software languages

Peter D. Mosses

The programming and modelling languages currently used in software engineering generally have plenty of tool support. But although their syntax is specified using formal grammars or meta-models, complete formal semantic specifications are seldom provided.

The difficulty of reusing parts of semantic specifications, and of co-evolving such specifications with languages, is a significant drawback for the practical use of formal semantics. I have collaborated in the development of several meta-languages for semantic specification, aiming to eliminate such drawbacks: action semantics, and modular variants of structural operational semantics (MSOS, I-MSOS); this led to the PLanCompS project and to CBS, a meta-language for component-based semantics.

The components of language specifications in CBS correspond to so-called fundamental programming constructs (funcons). The main feature of CBS is that each funcon is defined once and for all: the addition of new funcons does not require any changes to previous definitions, and behavioural laws are preserved. In contrast to software packages, the definition of each funcon has to remain fixed after its publication.

As well as explaining how component-based semantics achieves these desirable pragmatic properties, and comparing its features with those of some other meta-languages, I will demonstrate the current tool support for CBS, which is implemented in Spoofax.

SLE will for the second year use an evaluation process for assessing the quality of artifacts on which papers are based. The aim of this evaluation process is to foster a culture of experimental reproducibility as well as a higher quality in the research area as a whole.

Authors of papers accepted for SLE 2017 will be invited to submit artifacts. Any kind of artifact that is presented in the paper, supplements the paper with further details, or underlies the paper can be submitted. This includes, for instance, tools, grammars, metamodels, models, programs, algorithms, scripts, proofs, datasets, statistical tests, checklists, surveys, interview scripts, visualizations, annotated bibliographies, and tutorials.

The submitted artifacts will be reviewed by a dedicated Artifact Evaluation Committee (AEC). Artifacts that live up to the expectations created by the paper will receive a badge of approval from the AEC. The approved artifacts will be invited for inclusion in the electronic conference proceedings published in the ACM Digital Library. This ensures the permanent and durable storage of the artifacts alongside the published papers, fostering the repeatability of experiments, enabling precise comparison with alternative approaches, and helping disseminate the authors’ ideas in detail.

The AEC will award the artifact that most significantly exceeds the expectations with a Distinguished Artifact Award.

Participating in the artifact evaluation and publishing approved artifacts in the ACM Digital Library is voluntary. However, we strongly encourage authors to consider this possibility as the availability of artifacts will greatly benefit readers of papers and increase the impact of the work. Note that the artifact evaluation cannot affect the acceptance of the paper, because it only happens after the decision about acceptance has been made.

The artifact evaluation process of SLE borrows heavily from the processes described at artifact-eval.org and used at ECOOP 2016 and ICSME 2016. The process is detailed in the following.

Submission

If and when your paper has been accepted for SLE 2017, you will be invited by the AEC chairs to submit the artifacts that underlie your work. This invitation will contain detailed instructions on how to submit your artifacts.

An artifact submission comprises the following components:

  • Paper: Preliminary PDF version of the accepted SLE 2017 paper. The paper will be used to evaluate the consistency of the accepted paper and the submitted artifact, as well as to assess whether the artifact lives up to the expectations created by the paper.
  • Authors of the artifact: This list may include people who are not authors of the accepted paper, but contributed to creating the artifact.
  • Abstract: A short description of the artifact to be used for assignments of artifacts to AEC members.
  • Artifact: An archive file (gz, xz, or zip) containing everything needed for supporting a full evaluation of the artifact. The archive file has to include at least the artifact itself and a text file “README.txt” that contains the following information:
    • An overview of the archive file documenting the content of the archive.
    • A setup / installation guide giving detailed instructions on how to setup or install the submitted artifact.
    • Detailed step-by-step instructions on how to reproduce any experiments or other activities that support the conclusions given in the paper.

If multiple artifacts underlie an accepted SLE paper, all artifacts should be collected in one archive and submitted together in one single submission. For instance, if a tool has been developed, a tutorial has been authored with detailed instructions on how to use the tool, and user studies have been performed for evaluating the tool’s properties, the tool, the tutorial, and the raw data collected in the user study should be packed in one archive file and submitted together in one single submission to the SLE 2017 artifact evaluation.
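The single-archive requirement above can be sketched as a small shell script. All directory names and README contents here are illustrative examples, not names mandated by SLE; the call only prescribes a README.txt with the three listed sections and one archive in gz, xz, or zip format:

```shell
# Hypothetical layout for a combined SLE artifact submission
# (tool + tutorial + user-study data, as in the example above).
mkdir -p artifact/tool artifact/tutorial artifact/user-study-data

# README.txt must document the archive contents, setup, and
# step-by-step reproduction instructions.
cat > artifact/README.txt <<'EOF'
Overview:  tool/ (the tool), tutorial/ (usage instructions),
           user-study-data/ (raw data from the evaluation)
Setup:     detailed installation instructions for the tool
Steps:     step-by-step instructions to reproduce the experiments
EOF

# Pack everything into one archive for one single submission.
tar czf artifact.tar.gz artifact
```

The point of the single archive is that reviewers receive one self-describing bundle: the README is the entry point, and every experiment referenced in the paper should be reachable from it.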

When preparing your artifact, aim to make it as accessible to the AEC as possible; in particular, it should be possible for the AEC to make quick progress in investigating your artifact. Please provide some simple scenarios describing concretely how the artifact is intended to be used. For a tool, this would include specific inputs to provide or actions to take, and the expected output or behavior in response to that input.

For artifacts that are tools, it is recommended to provide the tool installed and ready to use on a virtual machine for VirtualBox, VMware, SHARE or a similar widely available platform.

Please use widely supported open formats for documents (e.g., PDF, HTML) and data (e.g., CSV, JSON).

Evaluation Process

Submitted artifacts will be evaluated by the AEC against the following criteria: artifacts should be

  • consistent with the paper,
  • as complete as possible,
  • well documented, and
  • easy to (re)use, facilitating further research.

Each submitted artifact will be evaluated by at least two members of the AEC. The artifacts will be treated confidentially, just like the submitted papers.

Artifacts that pass the evaluation will receive an “Artifact Evaluated - Functional” badge and be invited for inclusion in the electronic conference proceedings published in the ACM Digital Library. Artifacts that will be included in the ACM Digital Library or that will be made permanently available in another publicly accessible archival repository will also receive the “Artifact Available” badge. Detailed definitions of these badges and the respective evaluation criteria may be found at the ACM Artifact Review Badging site.

The evaluation consists of two steps:

  1. Kicking-the-tires: Reviewers will check the artifact’s integrity and look for any possible setup problems that may prevent it from being properly evaluated (e.g., corrupted or missing files, VM won’t start, immediate crashes on the simplest example, etc.). In case of any problems, authors will be given a 48-hour period (August 17-18) to read and respond to the kick-the-tires reports of their artifacts and solve any issues preventing the artifact evaluation.
  2. Artifact assessment: Reviewers evaluate the artifacts and decide on the approval of the artifact.

Notification about the outcome of the artifact evaluation and reviews including suggestions for improving the artifacts will be distributed about one week before the deadline for the final version of the research paper, such that the outcome can be mentioned in the paper and the final artifact can be uploaded for inclusion in the ACM Digital Library.

Important Dates

  • August 10, 2017: Artifact submission
  • August 17-18, 2017: Kick-the-tires author response
  • September 1, 2017: Artifact notification

Artifact Evaluation Chairs

Artifact Evaluation Committee

to be announced

Further Information

For further information on the artifact evaluation of SLE 2017, feel free to contact the artifact evaluation chairs with an e-mail to sle2017ae@googlegroups.com.