ICSE 2025
Sat 26 April – Sun 4 May 2025, Ottawa, Ontario, Canada

Theme & Goals

The rapid advancements in AI, particularly the release of large language models (LLMs) and their applications, have attracted significant global interest and raised substantial concerns about responsible AI and AI safety. While LLMs are impressive examples of AI models, it is the compound AI systems, which integrate these models with other key components for function and quality/risk control, that are ultimately deployed and have real-world impact. These AI systems, especially autonomous LLM agents and those involving multi-agent interaction, require careful system-level engineering to ensure responsible AI and AI safety.

In recent years, numerous regulations, principles, and guidelines for responsible AI and AI safety have been issued by governments, research organizations, and enterprises. However, they are typically very high-level and do not provide concrete guidance for technologists on how to implement responsible and safe AI. Developing responsible AI systems goes beyond fixing traditional software code “bugs” and providing theoretical guarantees for algorithms. New and improved software/AI engineering approaches are required to ensure that AI systems are trustworthy and safe throughout their entire lifecycle and trusted by those who use and rely on them.

Diversity and inclusion principles in AI are crucial for ensuring that the technology fairly represents and benefits all segments of society, preventing biases that can lead to discrimination and inequality. By incorporating diverse perspectives within data, process, system, and governance of the AI eco-system, AI systems can be more innovative, ethical, and effective in addressing the needs of diverse and especially under-represented users. This commitment to diversity and inclusion also ensures responsible and ethical AI development by fostering transparency, accountability, and trustworthiness, thereby safeguarding against unintended harmful consequences and promoting societal well-being.

Achieving responsible AI engineering (building adequate software engineering tools to support the responsible development of AI systems) requires a comprehensive understanding of human expectations and the context in which AI systems are used. This workshop aims to bring together not only researchers and practitioners in software engineering and AI but also ethicists and experts from the social sciences and regulatory bodies, building a community that will tackle the engineering challenges practitioners face in developing responsible and safe AI systems. Traditional software engineering methods are not sufficient to tackle the unique challenges posed by advanced AI technologies. This workshop will provide valuable insights into how software engineering can evolve to meet these challenges, focusing on aspects such as requirements engineering, architecture and design, verification and validation, and operational processes like DevOps and AgentOps. By bringing together experts from various fields, the workshop aims to foster interdisciplinary collaboration that will drive the advancement of responsible AI and AI safety engineering practices.

The primary objectives of this workshop are to:

  1. Share cutting-edge software/AI engineering methods, techniques, tools, and real-world case studies that can help ensure responsible AI and AI safety.
  2. Facilitate discussions among researchers and practitioners from diverse fields, including software engineering, AI, ethics, social sciences, and regulatory bodies, to address the responsible AI and AI safety engineering challenges.
  3. Promote the development of new and improved software/AI engineering approaches to ensure AI systems are trustworthy and trusted throughout their lifecycle.
Plenary

Tue 29 Apr

Displayed time zone: Eastern Time (US & Canada)

09:00 - 10:30
Session 1: RAIE at 207
Chair(s): Qinghua Lu Data61, CSIRO
09:00
10m
Day opening
Opening Remarks
RAIE
Qinghua Lu Data61, CSIRO
09:10
50m
Keynote
Keynote 1 by Rick Kazman
RAIE
K: Rick Kazman University of Hawai‘i at Mānoa
10:00
15m
Talk
Insights and Current Gaps in Open-Source LLM Vulnerability Scanners: A Comparative Analysis
RAIE
Jonathan Brokman Fujitsu Research, Omer Hofman Fujitsu Research, Oren Rachmil Fujitsu Research, Inderjeet Singh Fujitsu Research, P: Vikas Pahuja Fujitsu Research, Aishvariya Priya Rathina Sabapathy Fujitsu Research, Amit Giloni Fujitsu Research, Roman Vainshtein Fujitsu Research, Hisashi Kojima Fujitsu Research
Pre-print
10:15
12m
Talk
Mitigating Values Debt in Generative AI: Responsible Engineering with Graph RAG
RAIE
P: Waqar Hussain Data61, CSIRO
10:30 - 11:00
10:30
30m
Break
Tuesday Morning Break
Catering

11:00 - 12:30
Session 2: RAIE at 207
Chair(s): Foutse Khomh Polytechnique Montréal
11:00
15m
Talk
Using Drift Planning to Improve Safety of Visual Navigation in Unmanned Aerial Vehicles
RAIE
Jeffrey Hansen Carnegie Mellon Software Engineering Institute, Sebastian Echeverria Carnegie Mellon Software Engineering Institute, Lena Pons Carnegie Mellon Software Engineering Institute, Lihan Zhan Carnegie Mellon Software Engineering Institute, Gabriel A. Moreno Carnegie Mellon University Software Engineering Institute, P: Grace Lewis Carnegie Mellon Software Engineering Institute
11:15
15m
Talk
LLM-AQuA-DiVeR: LLM-Assisted Quality Assurance Through Dialogues on Verifiable Specification with Requirement Owners
RAIE
P: Shohei Mitani Georgetown University, Salonee Moona Triple Point Security, Shinichiro Matsuo Georgetown University, Eric Burger Virginia Tech
11:30
12m
Talk
Towards Ensuring Responsible AI for Medical Device Certification
RAIE
P: Giulio Mallardi University of Bari, Luigi Quaranta University of Bari, Italy, Fabio Calefato University of Bari, Filippo Lanubile University of Bari
11:42
12m
Talk
Navigating the landscape of AI test methods using taxonomy-based selection
RAIE
Maximilian Pintz Fraunhofer Institute for Intelligent Analysis and Information Systems, University of Bonn, Anna Schmitz Fraunhofer Institute for Intelligent Analysis and Information Systems, Rebekka Görge Fraunhofer Institute for Intelligent Analysis and Information Systems, P: Sebastian Schmidt Fraunhofer Institute for Intelligent Analysis and Information Systems, Daniel Becker, Maram Akila Fraunhofer Institute for Intelligent Analysis and Information Systems, Lamarr Institute, Michael Mock Fraunhofer Institute for Intelligent Analysis and Information Systems
11:54
12m
Talk
Responsible AI in the Software Industry: A Practitioner-Centered Perspective
RAIE
P: Matheus de Morais Leça University of Calgary, Mariana Pinheiro Bento University of Calgary, Ronnie de Souza Santos University of Calgary
Pre-print
12:06
12m
Talk
The Privacy Pillar - A Conceptual Framework for Foundation Model-based Systems
RAIE
Tingting Bi The University of Melbourne, Guangsheng Yu University of Technology Sydney, Qin Wang CSIRO Data61
12:30 - 14:00
12:30
90m
Lunch
Tuesday Lunch
Catering

14:00 - 15:30
Session 3: RAIE at 207
Chair(s): Maximilian Poretschkin Fraunhofer IAIS & University of Bonn
14:00
50m
Keynote
Keynote 2 by David Lo
RAIE
K: David Lo Singapore Management University
14:50
15m
Talk
Security of AI Agents
RAIE
P: Yifeng He University of California, Davis, Ethan Wang University of California at Davis, Yuyang Rong University of California, Davis, Zifei Cheng University of California, Davis, Hao Chen University of California at Davis
15:05
15m
Talk
Raising AI Ethics Awareness: Insights from a Quiz-Based Workshop with Software Practitioners – An Experience Report
RAIE
Aastha Pant Monash University, P: Rashina Hoda Monash University, Paul McIntosh RMIT University
15:30 - 16:00
15:30
30m
Break
Tuesday Afternoon Break
Catering

16:00 - 17:30
Session 4: RAIE at 207
Chair(s): Muneera Bano CSIRO's Data61
16:00
15m
Talk
Towards Responsible AI in Education: Hybrid Recommendation System for K-12 Students Case Study
RAIE
Nazarii Drushchak SoftServe Inc., P: Vladyslava Tyshchenko SoftServe Inc., Nataliya Polyakovska SoftServe Inc.
Pre-print
16:15
12m
Short-paper
Compliance Made Practical: Translating the EU AI Act into Implementable Actions
RAIE
P: Niklas Bunzel Fraunhofer Institute for Secure Information Technology
16:27
15m
Talk
Leveraging Existing Road-Vehicle Standards to address EU AI Act Compliance
RAIE
P: Shanza Ali Zafar Fraunhofer IKS, Jessica Kelly Fraunhofer IKS, Lena Heidemann Fraunhofer IKS, Núria Mata Fraunhofer IKS
16:42
3m
Break
Mini-break
RAIE

16:45
35m
Panel
Panel Discussion - Diversity and Inclusion in AI (Chaired by Muneera Bano)
RAIE
P: Muneera Bano CSIRO's Data61, P: Rashina Hoda Monash University, P: Daniel Amyot University of Ottawa, P: Ronnie de Souza Santos University of Calgary
17:20
10m
Day closing
Closing Remarks
RAIE
Qinghua Lu Data61, CSIRO
19:00 - 22:00
Quiet Room Tuesday Evening: Social, Networking and Special Rooms at 202

Accepted Papers

Title
Compliance Made Practical: Translating the EU AI Act into Implementable Actions
RAIE
Insights and Current Gaps in Open-Source LLM Vulnerability Scanners: A Comparative Analysis
RAIE
Pre-print
Leveraging Existing Road-Vehicle Standards to address EU AI Act Compliance
RAIE
LLM-AQuA-DiVeR: LLM-Assisted Quality Assurance Through Dialogues on Verifiable Specification with Requirement Owners
RAIE
Mitigating Values Debt in Generative AI: Responsible Engineering with Graph RAG
RAIE
Navigating the landscape of AI test methods using taxonomy-based selection
RAIE
Raising AI Ethics Awareness: Insights from a Quiz-Based Workshop with Software Practitioners – An Experience Report
RAIE
Responsible AI in the Software Industry: A Practitioner-Centered Perspective
RAIE
Pre-print
Security of AI Agents
RAIE
The Privacy Pillar - A Conceptual Framework for Foundation Model-based Systems
RAIE
Towards Ensuring Responsible AI for Medical Device Certification
RAIE
Towards Responsible AI in Education: Hybrid Recommendation System for K-12 Students Case Study
RAIE
Pre-print
Using Drift Planning to Improve Safety of Visual Navigation in Unmanned Aerial Vehicles
RAIE

Call for Papers

Topics of interest include, but are not limited to:

  • Requirement engineering for responsible AI and AI safety
  • Responsible-AI-by-design and AI-safety-by-design software architecture
  • Verification and validation for responsible AI and AI safety
  • DevOps, MLOps, LLMOps, AgentOps for ensuring responsible AI and AI safety
  • Development processes for responsible and safe AI systems
  • Responsible AI and AI safety evaluation tools and techniques
  • Reproducibility and traceability of AI systems
  • Trust and trustworthiness of AI systems
  • Responsible AI and AI safety governance
  • Diversity and inclusion in the responsible AI ecosystem: humans, data, processes/algorithms, systems, governance
  • Operationalization of laws (e.g., EU AI Act) and standards
  • Human aspects of responsible AI and AI safety engineering
  • Responsible AI and AI safety engineering for next-generation foundation model-based AI systems (e.g., LLM-based agents)
  • Case studies from certain high-priority domains (e.g., financial services, scientific discovery, health, environment, energy)

The workshop will be highly interactive, including invited keynotes, talks, and paper presentations on different topics in the area of responsible AI engineering.

Two types of contributions will be considered:

  • A full research or experience paper of at most 8 pages, including references. Papers describing challenges, early results, visions, or experiences from or in cooperation with practitioners are encouraged.
  • A short research or experience paper of at most 4 pages, including references, covering the same topics as full papers.

Submission Guidelines

Submissions must conform to the IEEE conference proceedings template, as specified in the IEEE Conference Proceedings Formatting Guidelines (title in 24pt font and full text in 10pt type; LaTeX users must use \documentclass[10pt,conference]{IEEEtran} without including the compsoc or compsocconf options).

  • RAIE 2025 will employ a single-anonymous review process, so authors should include their names and affiliations in the submission.
  • All submissions must conform to the specified page length above. All submissions must be in PDF.
  • Submissions must strictly conform to the IEEE conference proceedings formatting instructions specified above. Alterations of spacing, font size, and other changes that deviate from the instructions may result in desk rejection without further review.
  • For other submission policies such as research involving human participants/subjects and ORCID, please refer to ICSE 2025 Technical Track Submission Process.
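
As a sketch, a minimal document skeleton satisfying the stated formatting requirement might look like the following (the title, author, and affiliation are placeholders, not part of the guidelines):

```latex
% Required class and options; do NOT add compsoc or compsocconf.
\documentclass[10pt,conference]{IEEEtran}

\begin{document}

\title{Your Paper Title}
\author{\IEEEauthorblockN{First Author}
\IEEEauthorblockA{Affiliation \\ author@example.org}}
\maketitle

\begin{abstract}
Abstract text here.
\end{abstract}

\section{Introduction}
Body text is set in 10pt type by the class.

\end{document}
```

Compiling this skeleton with pdflatex produces a PDF in the required layout; since the review process is single-anonymous, real names and affiliations belong in the author block.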

Rick Kazman


What Architects Don't Know About Building ML-intensive Systems

Abstract

Machine learning components are being deployed in software systems across every business sector, and their importance is continually growing. However, the engineering practices for building these systems remain poorly understood compared to those for conventional software systems. In this talk I will discuss our efforts to create practical guidance to support architects in designing and implementing machine learning-intensive systems, and to identify areas where there are gaps in understanding and achievement. Building on our prior research, we developed a checklist of quality concerns for architects of machine learning-intensive systems. This checklist was iteratively refined through expert interviews and subsequently validated in a workshop with experienced architects.

Bio

Rick Kazman is the Danny and Elsa Lui Distinguished Professor of Information Technology Management at the University of Hawaii. His research interests are software architecture, design and analysis tools, and technical debt. Kazman has been involved in the creation of several influential methods and tools for architecture analysis, including the Architecture Tradeoff Analysis Method and the Titan and DV8 tools. He is the author of over 250 publications, co-author of three patents and nine books, including Software Architecture in Practice, Technical Debt: How to Find It and Fix It, Designing Software Architectures: A Practical Approach, and Ultra-Large-Scale Systems: The Software Challenge of the Future. His methods and tools have been adopted by many Fortune 1000 companies and his work has been cited over 30,000 times, according to Google Scholar. He is currently a member of the IEEE Computer Society’s Board of Governors.

Kazman received a B.A. (English/Music) and M.Math (Computer Science) from the University of Waterloo, an M.A. (English) from York University, and a Ph.D. (Computational Linguistics) from Carnegie Mellon University. How he ever became a researcher in software engineering is anybody's guess. When not doing architecture things, Kazman may be found cycling, singing a cappella music, gardening, or playing the piano.


David Lo


Engineering Safer AI Systems: From Code to Control to Communication

Abstract

Ensuring the safety of AI systems is critical as they increasingly influence the code we generate, the control we exert over physical systems, and the communication between humans and machines. This keynote presents advances in engineering AI systems that are safer by design, highlighting testing and healing techniques across three domains: code, control, and communication. The first part focuses on code, discussing methods for uncovering vulnerabilities in AI code generators and mitigating hidden security threats through real-time hardening. The second part addresses control, presenting approaches to systematically uncover safety violations in AI-enabled control systems and to synthesize runtime shields that enforce safe behaviors without retraining. The final part turns to communication, showcasing techniques to detect demographic bias in sentiment analysis models and to heal unfair predictions dynamically at runtime. Although the specific challenges and methods differ across these domains, a common goal unites them: enabling AI systems to behave safely and responsibly, even when their inner workings are inaccessible. By combining lightweight testing, runtime verification, and on-the-fly mitigation, the approaches discussed pave the way toward AI systems that are not only powerful, but also trustworthy and resilient.

Bio

David Lo is the OUB Chair Professor of Computer Science and the founding Director of the Center for Research in Intelligent Software Engineering (RISE) at Singapore Management University. Championing the field of AI for Software Engineering (AI4SE) since the mid-2000s, he has demonstrated how AI — encompassing data mining, machine learning, information retrieval, natural language processing, and search-based algorithms — can transform software engineering data into actionable insights and automation. Through empirical studies, he has identified practitioners' pain points, characterized the limitations of AI4SE solutions, and explored practitioners' acceptance thresholds for AI-powered tools. His contributions have led to over 20 awards, including two Test-of-Time awards and eleven ACM SIGSOFT/IEEE TCSE Distinguished Paper awards, and his work has garnered close to 40,000 citations. An ACM Fellow, IEEE Fellow, ASE Fellow, and National Research Foundation Investigator (Senior Fellow), Lo has also served as the GC of MSR'22 and ASE'16, and as a PC Co-Chair for ASE'20, FSE'24, and ICSE'25. For more information, please visit: http://www.mysmu.edu/faculty/davidlo/

Date: Tuesday, 29 April 2025
Time: 16:45 – 17:20
Event: RAIE Workshop, ICSE 2025, Ottawa


Panel Overview

AI is no longer just a tool; it’s shaping everything from scientific research to software engineering and beyond. As its influence grows, so does the urgent need to build justice, fairness, and equity into the core of these systems.

In this panel, we'll tackle the rising backlash against diversity and inclusion and talk about how we can keep moving forward without losing the progress we've gained and championed. This isn't just about being ethical; it's about building AI that truly serves everyone.

Join us for an honest, powerful conversation on why inclusion in AI isn’t optional; it’s essential for the future we all share.


Meet Our Panellists

We are bringing together three distinguished researchers whose combined expertise spans software engineering, human-centred AI, socio-technical research, ethical AI, and diversity and inclusion.


Moderator

  • Muneera Bano — Principal Research Scientist, CSIRO, Australia
Questions? Use the RAIE contact form.