CAIN 2025
Sun 27 - Mon 28 April 2025 Ottawa, Ontario, Canada
co-located with ICSE 2025

This program is tentative and subject to change.

Mon 28 Apr 2025 12:05 - 12:13 at 212 - Session 6: Doctorial Symposium

The adoption of Large Language Models (LLMs), including tools like ChatGPT and Bard, has transformed industries by providing advanced natural language understanding, human-like text generation, and effective problem-solving capabilities. However, despite their widespread use, LLMs have also raised critical security and privacy concerns. For example, adversarial attacks can use LLM outputs to spread misinformation or disclose sensitive information. A notable example is the ability of adversarial prompts to extract private data embedded in LLM training datasets. This highlights the urgent need to address data leakage risks in LLM-based applications. This study focuses on the critical problem of securing LLM-based applications by presenting a comprehensive framework to identify, address, and reduce these vulnerabilities systematically. The framework aims to combine established software testing techniques with AI-specific methods. It also emphasizes seamless integration into DevSecOps workflows in organizations to ensure scalable, secure, and reliable deployment and operation of LLM-based systems.

This program is tentative and subject to change.

Mon 28 Apr

Displayed time zone: Eastern Time (US & Canada) change

11:00 - 12:30
Session 6: Doctorial Symposium Doctoral Symposium at 212
11:00
8m
Talk
A Metrics-Oriented Architectural Model to Characterize Complexity on Machine Learning-Enabled Systems
Doctoral Symposium
Renato Cordeiro Ferreira University of São Paulo
11:08
8m
Talk
Architectures to Embrace Change for Trustworthy AI-based Systems
Doctoral Symposium
Merel Veracx Fontys University of Applied Sciences
11:16
8m
Talk
Assessing and Enhancing the Robustness of LLM-based Multi-Agent Systems Through Chaos Engineering.
Doctoral Symposium
Joshua Segun Owotogbe JADS/Tilburg University
11:24
8m
Talk
CoCo Challenges in ML Engineering Teams: How to Collaboratively Build ML-Enabled Systems
Doctoral Symposium
Aidin Azamnouri Technical University of Munich
11:32
8m
Talk
Designing ML-Enabled Software Systems with ML Model Composition: A Green AI Perspective
Doctoral Symposium
Rumbidzai Chitakunye Vrije Universiteit Amsterdam
11:40
8m
Talk
Identification and Optimization of Redundant Code Using Large Language Models
Doctoral Symposium
Shamse Tasnim Cynthia University of Saskatchewan
11:49
8m
Talk
Model-Based Verification for AI-Enabled Cyber-Physical Systems through Guided Falsification of Temporal Logic Properties
Doctoral Symposium
Hadiza Yusuf University of Michigan - Dearborn
11:57
8m
Talk
Optimizing Data Analytics Workflows through User-driven Experimentation: Progress and Updates
Doctoral Symposium
Keerthiga Rajenthiram Vrije Universiteit Amsterdam
12:05
8m
Talk
Systematic Testing of Security-Related Defects in LLM-Based Applications
Doctoral Symposium
Hasan Kaplan Jheronimus Academy of Data Science, Tilburg University
12:13
8m
Talk
Towards an Adoption Framework to Foster Trust in AI-Assisted Software Engineering
Doctoral Symposium
Marvin Muñoz Barón University of Stuttgart
12:21
8m
Talk
Towards a Privacy-by-Design Framework for ML-Enabled Systems
Doctoral Symposium
Yorick Sens Ruhr University Bochum
:
:
:
: