3rd International Workshop on Responsible AI Engineering (RAIE 2025)
Theme & Goals
The rapid advancement of AI, particularly the release of large language models (LLMs) and their applications, has attracted significant global interest and raised substantial concerns about responsible AI and AI safety. While LLMs are impressive examples of AI models, it is compound AI systems, which integrate these models with other key components for functionality and quality/risk control, that are ultimately deployed and have real-world impact. These AI systems, especially autonomous LLM agents and systems involving multi-agent interaction, require careful system-level engineering to ensure responsible AI and AI safety.
In recent years, numerous regulations, principles, and guidelines for responsible AI and AI safety have been issued by governments, research organizations, and enterprises. However, they are typically very high-level and do not provide concrete guidance for technologists on how to implement responsible and safe AI. Developing responsible AI systems goes beyond fixing traditional software code “bugs” and providing theoretical guarantees for algorithms. New and improved software/AI engineering approaches are required to ensure that AI systems are trustworthy and safe throughout their entire lifecycle and trusted by those who use and rely on them.
Diversity and inclusion principles in AI are crucial for ensuring that the technology fairly represents and benefits all segments of society, preventing biases that can lead to discrimination and inequality. By incorporating diverse perspectives across the data, processes, systems, and governance of the AI ecosystem, AI systems can be more innovative, ethical, and effective in addressing the needs of diverse and especially under-represented users. This commitment to diversity and inclusion also supports responsible and ethical AI development by fostering transparency, accountability, and trustworthiness, thereby safeguarding against unintended harmful consequences and promoting societal well-being.
Achieving responsible AI engineering, that is, building adequate software engineering tools to support the responsible development of AI systems, requires a comprehensive understanding of human expectations and the context in which AI systems are used. This workshop aims to bring together not only researchers and practitioners in software engineering and AI, but also ethicists and experts from the social sciences and regulatory bodies, to build a community that will tackle the engineering challenges practitioners face in developing responsible and safe AI systems. Traditional software engineering methods are not sufficient for the unique challenges posed by advanced AI technologies. This workshop will provide valuable insights into how software engineering can evolve to meet these challenges, focusing on aspects such as requirements engineering, architecture and design, verification and validation, and operational processes such as DevOps and AgentOps. By bringing together experts from various fields, the workshop aims to foster interdisciplinary collaboration that will advance responsible AI and AI safety engineering practices.
The primary objectives of this workshop are to:
- Share cutting-edge software/AI engineering methods, techniques, tools, and real-world case studies that can help ensure responsible AI and AI safety.
- Facilitate discussions among researchers and practitioners from diverse fields, including software engineering, AI, ethics, social sciences, and regulatory bodies, to address the responsible AI and AI safety engineering challenges.
- Promote the development of new and improved software/AI engineering approaches to ensure AI systems are trustworthy and trusted throughout their lifecycle.
This program is tentative and subject to change.
Tue 29 Apr (displayed time zone: Eastern Time, US & Canada)
09:00 - 10:30
- 09:00 10m Day opening | Opening Remarks. Qinghua Lu (Data61, CSIRO)
- 09:10 50m Keynote | Keynote 1 by Rick Kazman. Rick Kazman (University of Hawai‘i at Mānoa)
- 10:00 15m Talk | Insights and Current Gaps in Open-Source LLM Vulnerability Scanners: A Comparative Analysis. Jonathan Brokman, Omer Hofman, Oren Rachmil, Inderjeet Singh, Vikas Pahuja, Aishvariya Priya Rathina Sabapathy, Amit Giloni, Roman Vainshtein, Hisashi Kojima (Fujitsu Research)
- 10:15 12m Talk | Mitigating Values Debt in Generative AI: Responsible Engineering with Graph RAG. Waqar Hussain (Data61, CSIRO)
10:30 - 11:00
- 10:30 30m Break | Tuesday Morning Break (Catering)
11:00 - 12:30
- 11:00 15m Talk | Using Drift Planning to Improve Safety of Visual Navigation in Unmanned Aerial Vehicles. Jeffrey Hansen, Sebastian Echeverria, Lena Pons, Lihan Zhan, Gabriel A. Moreno, Grace Lewis (Carnegie Mellon Software Engineering Institute)
- 11:15 15m Talk | LLM-AQuA-DiVeR: LLM-Assisted Quality Assurance Through Dialogues on Verifiable Specification with Requirement Owners. Shohei Mitani (Georgetown University), Salonee Moona (Triple Point Security), Shinichiro Matsuo (Georgetown University), Eric Burger (Virginia Tech)
- 11:30 12m Talk | Towards Ensuring Responsible AI for Medical Device Certification. Giulio Mallardi, Luigi Quaranta, Fabio Calefato, Filippo Lanubile (University of Bari)
- 11:42 12m Talk | Navigating the landscape of AI test methods using taxonomy-based selection. Maximilian Pintz (Fraunhofer Institute for Intelligent Analysis and Information Systems, University of Bonn), Anna Schmitz, Rebekka Görge, Sebastian Schmidt (Fraunhofer Institute for Intelligent Analysis and Information Systems), Daniel Becker, Maram Akila (Fraunhofer Institute for Intelligent Analysis and Information Systems, Lamarr Institute), Michael Mock (Fraunhofer Institute for Intelligent Analysis and Information Systems)
- 11:54 12m Talk | Responsible AI in the Software Industry: A Practitioner-Centered Perspective. Matheus de Morais Leça, Mariana Pinheiro Bento, Ronnie de Souza Santos (University of Calgary). Pre-print available.
- 12:06 12m Talk | The Privacy Pillar - A Conceptual Framework for Foundation Model-based Systems. Tingting Bi (The University of Melbourne), Guangsheng Yu (University of Technology Sydney), Qin Wang (CSIRO Data61)
12:30 - 14:00
- 12:30 90m Lunch | Tuesday Lunch (Catering)
14:00 - 15:30
- 14:00 50m Keynote | Keynote 2 by David Lo. David Lo (Singapore Management University)
- 14:50 15m Talk | Security of AI Agents. Yifeng He, Ethan Wang, Yuyang Rong, Zifei Cheng, Hao Chen (University of California, Davis)
- 15:05 15m Talk | Raising AI Ethics Awareness: Insights from a Quiz-Based Workshop with Software Practitioners – An Experience Report
15:30 - 16:00
- 15:30 30m Break | Tuesday Afternoon Break (Catering)
16:00 - 17:30 | Session 4 (Room 207). Chair(s): Apostol Vassilev (National Institute of Standards and Technology), Muneera Bano (CSIRO's Data61)
- 16:00 15m Talk | Towards Responsible AI in Education: Hybrid Recommendation System for K-12 Students Case Study. Nazarii Drushchak, Vladyslava Tyshchenko, Nataliya Polyakovska (SoftServe Inc.). Pre-print available.
- 16:15 12m Short paper | Compliance Made Practical: Translating the EU AI Act into Implementable Actions. Niklas Bunzel (Fraunhofer Institute for Secure Information Technology)
- 16:27 15m Talk | Leveraging Existing Road-Vehicle Standards to Address EU AI Act Compliance. Shanza Ali Zafar, Jessica Kelly, Lena Heidemann, Núria Mata (Fraunhofer IKS)
- 16:42 3m Break | Mini-break
- 16:45 35m Panel | Panel Discussion - Diversity and Inclusion in AI. Chaired by Muneera Bano (CSIRO's Data61)
- 17:20 10m Day closing | Closing Remarks. Qinghua Lu (Data61, CSIRO)
Call for Papers
Topics of interests include, but are not limited to:
- Requirements engineering for responsible AI and AI safety
- Responsible-AI-by-design and AI-safety-by-design software architecture
- Verification and validation for responsible AI and AI safety
- DevOps, MLOps, LLMOps, AgentOps for ensuring responsible AI and AI safety
- Development processes for responsible and safe AI systems
- Responsible AI and AI safety evaluation tools and techniques
- Reproducibility and traceability of AI systems
- Trust and trustworthiness of AI systems
- Responsible AI and AI safety governance
- Diversity and inclusion in the responsible AI ecosystem: humans, data, processes/algorithms, systems, governance
- Operationalization of laws (e.g., EU AI Act) and standards
- Human aspect of responsible AI and AI safety engineering
- Responsible AI and AI safety engineering for next-generation foundation model-based AI systems (e.g., LLM-based agents)
- Case studies from certain high-priority domains (e.g., financial services, scientific discovery, health, environment, energy)
The workshop will be highly interactive, including invited keynotes, talks, and paper presentations covering different topics in the area of responsible AI engineering.
Two types of contributions will be considered:
- A full research or experience paper of at most 8 pages, including references. Papers describing challenges, early results, visions, or experiences from or in cooperation with practitioners are encouraged.
- A short research or experience paper of at most 4 pages, including references, covering the same topics as full papers.
Submission Guidelines
Submissions must conform to the IEEE conference proceedings template, as specified in the IEEE Conference Proceedings Formatting Guidelines (title in 24pt font and full text in 10pt type; LaTeX users must use \documentclass[10pt,conference]{IEEEtran} without the compsoc or compsocconf options).
- RAIE 2025 will employ a single-anonymous review process, so authors should include their names and affiliations in the submission.
- All submissions must conform to the specified page length above. All submissions must be in PDF.
- Submissions must strictly conform to the IEEE conference proceedings formatting instructions specified above. Alterations to spacing, font size, and other deviations from the instructions may result in desk rejection without further review.
- For other submission policies, such as research involving human participants/subjects and ORCID requirements, please refer to the ICSE 2025 Technical Track Submission Process.