2nd International Workshop on Responsible AI Engineering (RAIE’24)
Theme & Goals
The recent release of ChatGPT, Bard, and other large language model (LLM)-based chatbots has attracted enormous global attention. The black-box nature of AI and its rapid advancement have sparked significant concerns about responsible AI. It is crucial to ensure that AI systems are developed and used responsibly throughout their entire lifecycle and are trusted by the humans who are expected to use and rely on them.
A number of AI ethics principles have been published recently, which AI systems should conform to, and some consensus around these principles has begun to emerge. A principle-based approach allows technology-neutral, future-proof, and context-specific interpretation and operationalisation. However, high-level AI ethics principles are far from sufficient to ensure trustworthy and responsible AI systems: there is a significant gap between high-level principles and low-level, concrete engineering solutions, and without concrete methods and tools, practitioners are left with little beyond truisms. For example, operationalising the human-centered values principle, i.e., specifying how it can be designed for, implemented, and monitored throughout the entire lifecycle of an AI system, is a challenging and complex task.

Trustworthy and responsible AI challenges can occur at any stage of the AI system development lifecycle, cutting across the AI components, non-AI components, and data components of a system. New and improved software engineering approaches are required to ensure that the AI systems developed are trustworthy throughout their entire lifecycle and trusted by those who use and rely on them. To enforce responsible AI requirements, those requirements need to be measurable, verifiable, and monitorable. We also need assessment mechanisms and engineering tools to systematically support the implementation of responsible AI requirements across all phases of AI application development, maintenance, and operations.
Achieving responsible AI engineering (i.e., building adequate software engineering tools to support the responsible engineering of AI systems) requires a good understanding of human expectations and of the contexts in which AI systems are used. Hence, the aim of this workshop is to bring together not only researchers and practitioners in software engineering and AI, but also social scientists and regulatory bodies, to build a community that will target the AI engineering challenges practitioners face in developing AI systems responsibly. In this workshop, we are looking for cutting-edge software/AI engineering methods, techniques, tools, and real-world case studies that can help operationalise responsible AI.
Workshop date: Tuesday 16 April 2024 (times displayed in the Lisbon time zone)
Accepted Papers
Call for Papers
Topics of Interest
Topics of interest include, but are not limited to:
- Requirement engineering for responsible AI
- Software architecture and design of responsible AI systems
- Verification and validation for responsible AI systems
- DevOps, MLOps, MLSecOps, LLMOps for responsible AI systems
- Development processes for responsible AI systems
- Responsible AI governance, assessment tools/techniques
- Reproducibility and traceability of AI systems
- Trust and trustworthiness of AI systems
- Human aspect of responsible AI engineering
- Responsible AI engineering for next-generation foundation model based AI systems (e.g., LLM-based)
- Regulatory and policy implications
- Education and training in responsible AI
The workshop will be highly interactive, including invited keynotes and talks as well as paper presentations on different topics in the area of responsible AI engineering.
Two types of contributions will be considered:
- A full research or experience paper of up to 8 pages, including references. Papers describing challenges, early results, visions, or experience gained from or in cooperation with practitioners are encouraged.
- A short research or experience paper of up to 4 pages, including references, on the same topics as full papers.
Submission Guidelines
All authors should use the official “ACM Primary Article Template”, which can be obtained from the ACM Proceedings Template page. LaTeX users should use the sigconf and review options (the latter produces line numbers for easy reference by the reviewers). To that end, the following LaTeX code can be placed at the start of the LaTeX document:
\documentclass[sigconf,review]{acmart}
\acmConference[ICSE 2024]{46th International Conference on Software Engineering}{April 2024}{Lisbon, Portugal}
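For orientation, the two commands above belong in the preamble of a submission. A minimal skeleton might look like the following; the title, author, and affiliation fields are placeholders, not prescribed by the workshop:

```latex
% Minimal acmart skeleton for a RAIE'24 submission (placeholder metadata).
\documentclass[sigconf,review]{acmart} % review option adds line numbers
\acmConference[ICSE 2024]{46th International Conference on Software Engineering}{April 2024}{Lisbon, Portugal}

\begin{document}

\title{Paper Title} % placeholder
% Single-anonymous review: author names should appear on the submission.
\author{Author Name} % placeholder
\affiliation{%
  \institution{Institution}
  \city{City}
  \country{Country}}

\maketitle

% Body text goes here.

\end{document}
```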
- All submissions must be in English and in PDF format. Papers must not exceed the specified page limits. It is NOT possible to pay for extra pages.
- All submitted papers will be reviewed by the program committee on the basis of technical quality, relevance, significance, and clarity. Please note that RAIE’24 will employ a single-anonymous review process, i.e., author names should be specified in the submission.
- Other more detailed submission policies and guidelines for RAIE’24 are in line with the ICSE Research track Submission Process. Please note all papers must follow the ACM formatting instructions.
The official publication date of the workshop proceedings is the date the proceedings are made available in the ACM Library. This date may be up to two weeks prior to the first day of ICSE 2024. The official publication date affects the deadline for any patent filings related to published work.