ICSE 2023
Sun 14 - Sat 20 May 2023 Melbourne, Australia

D-SyMLe: International Workshop on Dependability of Safety-Critical Systems with Machine Learned Components (May 20, 2023, in person)

Note: paper submission deadline extended to January 27, 2023

About D-SyMLe

Machine learning components are becoming an intrinsic part of safety-critical systems, from driver-assistance vehicles to medical imaging devices. Since undesired behaviours in such systems can lead to fatal accidents, their dependability is paramount for their broad and safe adoption. System dependability encompasses robustness, reliability, integrity, security, privacy, and related properties. However, contrary to traditional system development, we are ill-equipped to ensure the dependability of systems with learned components. For example, robustness testing of ML components may apply targeted noise to inputs, but that type of noise may never be encountered in a deployment context. Meanwhile, environments with unexpected features may affect learned components in unsafe ways.

ML components are intrinsically different from traditional software in many ways: (1) they lack precise requirements, relying instead on proxies such as accuracy; (2) they depend on data with multiple sources of provenance; (3) they rely on architectural considerations driven by the capacity to efficiently achieve accuracy rather than by dependability; (4) they are implemented through an optimization process that is fraught with nondeterminism and has many degrees of freedom with subtle dependencies; and (5) their performance does not come with a human-consumable explanation. These differences create development challenges that are crucial to the deployment of safety-critical systems with ML components, which we will discuss at this workshop. The divergence of ML and SE development practices becomes an SE issue when ML components are incorporated into a system. ML components are inherently vulnerable, and features of safety-critical systems, such as fallback routines in the event of a failure, require an understanding of those vulnerabilities to be properly created and deployed. Because these features are often written during system development, the vulnerabilities of ML at that point become the system's responsibility.


The primary goal of this workshop is to foster innovative ideas on the use of SE techniques for the dependability of safety-critical systems with ML components and to promote cross-fertilization of research with the most relevant underlying areas. This workshop will provide a forum for discussing the challenges associated with dependability of systems with ML components, identifying areas of responsibility for the SE community, strategizing solutions for SE-owned challenges, and incorporating the (potentially enhanced) SE practices into ML-based system development processes. We aim to bring together researchers from different SE communities, e.g., requirements, modeling, safety, testing and formal verification, together with ML experts to highlight ongoing work and present new ideas. Additionally, we aim to sketch out an ongoing research agenda for future work and promote cross-community collaborations.

Workshop Format

The workshop will contain the following components:

  • The first 30 minutes of the workshop will be a round of lightning presentations by all participants, sharing their understanding of and interest in the dependability of ML-based systems.

  • Then, there will be a set of invited speakers' presentations characterizing state-of-the-art solutions and open challenges.

  • After all the presentations, and based on the interests of the participants, the workshop will split into discussion groups focused on different topics and emerging themes. The objective is to provide an opportunity for the participants to delve more deeply into the critical areas emerging from the collective presentations.

  • The workshop will end with a presentation of major insights gained during the workshop and a discussion of future directions by each discussion group.

Call for Papers

Prospective participants are invited to submit

  1. a short position or research paper (maximum 4 pages) or

  2. a short talk proposal, tool demonstration, industrial challenges, or an extended abstract of ongoing research (maximum 2 pages).

All submissions will be reviewed by members of the program committee and the organizing committee for quality and relevance.

All paper submissions to D-SyMLe must conform to the IEEE conference proceedings template, as specified in the IEEE Conference Proceedings Formatting Guidelines (title in 24pt font and full text in 10pt type; LaTeX users must use \documentclass[10pt,conference]{IEEEtran} without the compsoc or compsocconf options).
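For LaTeX users, a minimal document skeleton matching these formatting requirements might look like the following (the title, author, and section names are placeholders, not part of the workshop's instructions):

```latex
\documentclass[10pt,conference]{IEEEtran}
% Per the call for papers: do NOT add the compsoc or compsocconf options.

\begin{document}

\title{Paper Title Here} % rendered at 24pt by the IEEEtran class
\author{\IEEEauthorblockN{Author Name}
\IEEEauthorblockA{Affiliation \\ email@example.org}}
\maketitle

\begin{abstract}
Abstract text.
\end{abstract}

% Body text is set in 10pt via the class options above.
\section{Introduction}

\end{document}
```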

Note that submissions can also be made for presentation only.

Topics of Interest include, but are not limited to:

  • Safety requirements and specification of ML components or ML-based safety-critical systems

  • Trust and trustworthiness of ML components or ML-based safety-critical systems

  • Explainability of ML components or ML-based safety-critical systems

  • Privacy and security of ML components or ML-based safety-critical systems

  • Robustness and reliability of ML components or ML-based safety-critical systems

  • Model-based safety analysis of ML components or ML-based safety-critical systems

  • Architectures to manage scale, uncertainty, and safety of ML components or ML-based safety-critical systems

  • Dataset development for ML components or ML-based safety-critical systems

  • Verification and validation methods of ML components or ML-based safety-critical systems

  • Safety and security guidelines, standards and certification of systems with ML components

  • Hazard analysis of ML-based safety-critical systems

  • Safety and security assurance cases of ML-based systems

  • Risk assessment and reduction of ML-based safety-critical systems

  • ML safety education and awareness

Important Dates

  • Paper submissions due: January 27, 2023 (deadline extended!)

  • Notification to authors: February 24, 2023

  • Camera-ready copies due: March 17, 2023

Submission Site

EasyChair will be used to manage the submission and review process. Access the submission link here.