4th International Workshop on Responsible AI Engineering RAIE 2026
Theme & Goals
The rapid advancements in AI, particularly the release of large language models (LLMs) and their applications, have attracted significant global interest and raised substantial concerns about responsible AI and AI safety. While LLMs are impressive examples of AI models, it is the compound AI systems, which integrate these models with other key components for function and quality/risk control, that are ultimately deployed and have real-world impact. These AI systems, especially autonomous LLM agents and those involving multi-agent interactions, require careful system-level engineering to ensure responsible AI and AI safety.
In recent years, numerous regulations, principles, and guidelines for responsible AI and AI safety have been issued by governments, research organizations, and enterprises. However, they are typically very high-level and do not provide concrete guidance for technologists on how to implement responsible and safe AI. Developing responsible AI systems goes beyond fixing traditional software code “bugs” and providing theoretical guarantees for algorithms. New and improved software/AI engineering approaches are required to ensure that AI systems are trustworthy and safe throughout their entire lifecycle and trusted by those who use and rely on them.
Diversity and inclusion principles in AI are crucial for ensuring that the technology fairly represents and benefits all segments of society, preventing biases that can lead to discrimination and inequality. By incorporating diverse perspectives within data, process, system, and governance of the AI eco-system, AI systems can be more innovative, ethical, and effective in addressing the needs of diverse and especially under-represented users. This commitment to diversity and inclusion also ensures responsible and ethical AI development by fostering transparency, accountability, and trustworthiness, thereby safeguarding against unintended harmful consequences and promoting societal well-being.
Achieving responsible AI engineering (building adequate software engineering tools to support the responsible development of AI systems) requires a comprehensive understanding of human expectations and the utilization context of AI systems. This workshop aims to bring together not only researchers and practitioners in software engineering and AI but also ethicists and experts from social sciences and regulatory bodies, to build a community that will tackle the engineering challenges practitioners face in developing responsible and safe AI systems. Traditional software engineering methods are not sufficient to tackle the unique challenges posed by advanced AI technologies. This workshop will provide valuable insights into how software engineering can evolve to meet these challenges, focusing on aspects such as requirements engineering, architecture and design, verification and validation, and operational processes like DevOps and AgentOps. By bringing together experts from various fields, the workshop aims to foster interdisciplinary collaboration that will drive the advancement of responsible AI and AI safety engineering practices.
The primary objectives of this workshop are to:
- Share cutting-edge software/AI engineering methods, techniques, tools, and real-world case studies that can help ensure responsible AI and AI safety.
- Facilitate discussions among researchers and practitioners from diverse fields, including software engineering, AI, ethics, social sciences, and regulatory bodies, to address the responsible AI and AI safety engineering challenges.
- Promote the development of new and improved software/AI engineering approaches to ensure AI systems are trustworthy and trusted throughout their lifecycle.
The workshop is scheduled for 14:00-15:30 and 16:00-17:30 on April 12, 2026. It will feature six paper presentations and two keynotes.
Accepted Papers
Call for Papers
Topics of interest include, but are not limited to:
- Requirements engineering for responsible AI and AI safety
- Responsible-AI-by-design and AI-safety-by-design software architecture
- Verification and validation for responsible AI and AI safety
- DevOps, MLOps, LLMOps, AgentOps for ensuring responsible AI and AI safety
- Development processes for responsible and safe AI systems
- Responsible AI and AI safety evaluation tools and techniques
- Reproducibility and traceability of AI systems
- Trust and trustworthiness of AI systems
- Responsible AI and AI safety governance
- Diversity and inclusion in the responsible AI ecosystem: humans, data, processes/algorithms, systems, governance
- Operationalisation of laws (e.g., EU AI Act) and standards
- Evaluation of AI agent behaviors, reasoning reliability, consequences, and risk analysis
- Human-AI interaction and collaboration reliability and user-centric evaluation of AI agents
- Human aspect of responsible AI and AI safety engineering
- Responsible AI and AI safety engineering for next-generation foundation model based AI systems (e.g., LLM-based agents)
- Case studies from high-priority domains (e.g., financial services, scientific discovery, health, environment, energy)
The workshop will be highly interactive, including invited keynotes and talks, and paper presentations on different topics in the area of responsible AI engineering.
Three types of contributions will be considered:
- A full research or experience paper of 8 pages max;
- A short research or experience paper of 4 pages max;
- An extended abstract of 5 pages max. Extended abstracts are free of APC charges.
Submission Guidelines
All submissions must be in English and in PDF format. Papers must not exceed the page limits listed above. RAIE’26 employs a single-anonymous review process.
Detailed submission policies and guidelines for RAIE’26 are in line with the ICSE 2026 research track submission process (https://conf.researchr.org/track/icse-2026/icse-2026-research-track#submission-process).
Keynotes
Shaukat Ali
Head of Department, Research Professor, Chief Research Scientist, Simula Research Laboratory, Norway
Keynote Title
Responsible, Human-Centered Quantum Software Engineering
Abstract
Quantum computing offers transformative capabilities, from accelerating classical Artificial Intelligence (AI) to solving complex problems across multiple domains, including healthcare and climate. Yet its unique characteristics, such as superposition, entanglement, and inherent uncertainty, pose novel challenges for software engineering. If ethical and human-centered principles are ignored, quantum software and quantum AI can increase bias, make decisions harder to understand, and concentrate power among a few. This keynote presents a vision for responsible quantum software engineering, emphasizing interpretability, collaboration, fairness, and equitable access to quantum tools, infrastructure, and expertise. Drawing on emerging research, it highlights potential frameworks, tools, and practices that integrate ethics, governance, and human cognition into quantum software development from the outset. The keynote will argue that by embedding responsibility at the core, we can ensure that quantum software empowers everyone and shapes a more just and inclusive quantum future.
Biography
Shaukat Ali is a Chief Research Scientist, Research Professor, and Head of Department at Simula Research Laboratory in Oslo, Norway. He currently leads the Norwegian Quantum Software Center. His research focuses on developing advanced methods for engineering cyber-physical systems with artificial intelligence, digital twins, and quantum computing. He has led numerous national and European research projects in software testing, search-based software engineering, model-based systems engineering, and quantum software engineering. He is a co-founder of several key initiatives in the emerging field of quantum software, including the International Workshop on Quantum Software Engineering (held at ICSE), the International Conference on Quantum Software, and the QC+AI Workshop (held at AAAI). He also represents Simula in multiple national and international quantum computing research and industry networks.
Rashina Hoda
Professor of Software Engineering, Monash University, Australia
Keynote Title
From Biased to Trustworthy AI: Responsible Agentic Software Engineering Beyond Code
Abstract
Building on her vision of Agentic Software Engineering Beyond Code, this keynote will reframe agentic AI as part of a socio-technical ecosystem encompassing ethical alignment, requirements, design, development, and operations. It will present empirical insights from research on societal and professional biases in large language models, illustrating how AI systems can unintentionally reinforce stereotypes and exclusion when values remain implicit. To address these challenges, the talk will share CRAFT (comprehensive, responsible, adaptive, foundational, and translational), a values- and principles-based foundation for responsible agentic software engineering. This foundation aims to guide the design, evaluation, and governance of agentic systems across the software lifecycle, balancing autonomy with ethical responsibility, regulatory compliance, and human collaboration at the micro and macro levels. The talk will conclude by outlining key research and practice challenges for advancing responsible agentic software engineering.
Biography
Rashina Hoda is a Professor of Software Engineering at Monash University, Australia. Her research focuses on the human and socio-technical aspects of software engineering at the intersection of AI and digital health. She was named the 2025 Top Australian Researcher in Software Systems by The Australian. Her 2024 Springer book presents socio-technical grounded theory, a modern variation of traditional grounded theory that is being applied to produce high-quality empirical software engineering research. Rashina currently serves as an Associate Editor of IEEE Transactions on Software Engineering, PC Co-Chair for the SEIP track at ICSE 2026, and as a member of the ICSE Steering Committee. She is a TEDx speaker, recipient of the 2024 Women of Colour in STEM Guiding Star Mentorship Award, a 2021-22 Superstar of STEM, and a passionate champion of underrepresented girls and women in STEM. For more, visit: http://www.rashina.com/