ICSA 2025
Mon 31 March - Fri 4 April 2025 Odense, Denmark

With the growing popularity of Large Language Models (LLMs) in generative AI applications comes a growing need to verify, moderate, or “guardrail” LLM inputs and outputs. Guardrailing can be done with anything from simple regex detection to more complex techniques, such as using LLMs themselves to detect undesired content. Additionally, considerable effort has gone into creating and optimizing various LLM serving solutions. This paper describes our experience of using an adapter pattern with an LLM serving architecture to provide LLMs as guardrail models. The details on design trade-offs, such as performance and model accessibility, can aid in creating other LLM-based software architectures.
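The paper's implementation is not shown on this page; as a rough illustration of the adapter idea described in the abstract, the sketch below exposes both a regex check and an LLM served behind an HTTP endpoint through one shared detector interface. All names (GuardrailDetector, RegexDetector, LLMDetector) and the completion-style endpoint are illustrative assumptions, not the paper's API.

```python
# Illustrative sketch only: adapting an LLM serving endpoint to the same
# guardrail "detector" interface as a simple regex check. All class names
# and the serving endpoint are hypothetical assumptions.
import json
import re
import urllib.request
from abc import ABC, abstractmethod


class GuardrailDetector(ABC):
    """Common interface a workflow uses to screen text."""

    @abstractmethod
    def detect(self, text: str) -> bool:
        """Return True if the text should be flagged."""


class RegexDetector(GuardrailDetector):
    """Simple detector: flag text matching any configured pattern."""

    def __init__(self, patterns: list[str]):
        self._patterns = [re.compile(p, re.IGNORECASE) for p in patterns]

    def detect(self, text: str) -> bool:
        return any(p.search(text) for p in self._patterns)


class LLMDetector(GuardrailDetector):
    """Adapter: wraps an LLM serving endpoint behind the same interface.

    Assumes a hypothetical completion-style HTTP API returning JSON with
    a 'text' field; a real deployment would use its server's actual API.
    """

    def __init__(self, endpoint: str, prompt_template: str):
        self._endpoint = endpoint
        self._prompt_template = prompt_template

    def detect(self, text: str) -> bool:
        payload = json.dumps(
            {"prompt": self._prompt_template.format(text=text)}
        ).encode("utf-8")
        req = urllib.request.Request(
            self._endpoint,
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            answer = json.loads(resp.read())["text"]
        # Interpret the model's verdict; a real system would constrain
        # the output format (e.g., force a yes/no label).
        return answer.strip().lower().startswith("yes")


# The workflow only depends on GuardrailDetector, so regex- and
# LLM-based detectors are interchangeable:
detectors: list[GuardrailDetector] = [
    RegexDetector([r"\bssn\b", r"\d{3}-\d{2}-\d{4}"]),
    LLMDetector(
        "http://localhost:8000/v1/completions",  # hypothetical server
        "Does the following text contain harmful content? "
        "Answer yes or no.\n\n{text}",
    ),
]
flagged = any(d.detect("user input to screen") for d in detectors)
```

Under this kind of design, swapping a cheap regex detector for a served LLM (or combining both) is a configuration choice rather than a workflow change, which is one way the abstract's trade-off between performance and model accessibility could surface in practice.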

Tue 1 Apr

Displayed time zone: Brussels, Copenhagen, Madrid, Paris

09:00 - 10:00
SAML Session 1: Welcome + Papers
Workshops at Workshop Room 1 (U82)
Chair(s): Justus Bogner Vrije Universiteit Amsterdam, Henry Muccini University of L'Aquila, Italy, Marie Platenius-Mohr ABB Corporate Research, Karthik Vaidhyanathan IIIT Hyderabad
09:00
20m
Paper
Agentic RAG with Human-in-the-Retrieval
SAML 2025
Workshops
Xiwei (Sherry) Xu Data61, CSIRO, Dawen (David) Zhang CSIRO's Data61, Qing Liu University of Waterloo, Qinghua Lu Data61, CSIRO, Liming Zhu CSIRO's Data61
09:20
20m
Paper
Serving LLMs as detectors in workflows with guardrails
SAML 2025
Workshops
09:40
20m
Paper
Towards practicable Machine Learning development using AI Engineering Blueprints
SAML 2025
Workshops