Tue 1 Apr 2025 09:20 - 09:40 at Workshop Room 1 (U82) - SAML Session 1: Welcome + Papers Chair(s): Justus Bogner, Henry Muccini, Marie Platenius-Mohr, Karthik Vaidhyanathan
With the growing popularity of Large Language Model (LLM) usage in generative AI applications comes a growing need to verify, moderate, or “guardrail” LLM inputs and outputs. “Guardrailing” can be done with anything from simple regex detections to more complicated techniques, such as using LLMs themselves to detect undesired content. Additionally, considerable effort has gone into creating and optimizing various LLM serving solutions. This paper describes our experience of using an adapter pattern with an LLM serving architecture to provide LLMs as guardrail models. The details on design trade-offs, such as performance and model accessibility, can aid in creating other LLM-based software architectures.
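The adapter pattern the abstract mentions can be illustrated with a minimal sketch: heterogeneous detectors (a cheap regex check and an LLM-backed classifier) are wrapped behind one common guardrail interface so the workflow can chain them interchangeably. All class and function names below are illustrative assumptions, not from the paper, and the LLM call is stubbed with a placeholder callable.

```python
import re
from abc import ABC, abstractmethod
from typing import Callable, List


class Guardrail(ABC):
    """Common interface every detector adapts to (hypothetical)."""

    @abstractmethod
    def detect(self, text: str) -> bool:
        """Return True if the text should be flagged/blocked."""


class RegexGuardrail(Guardrail):
    """Simple regex-based detection, as named in the abstract."""

    def __init__(self, patterns: List[str]):
        self._compiled = [re.compile(p, re.IGNORECASE) for p in patterns]

    def detect(self, text: str) -> bool:
        return any(p.search(text) for p in self._compiled)


class LLMGuardrailAdapter(Guardrail):
    """Adapts an LLM serving client (any callable returning a label)
    to the same detect() interface; the real client would call a
    serving endpoint."""

    def __init__(self, classify: Callable[[str], str]):
        self._classify = classify

    def detect(self, text: str) -> bool:
        return self._classify(text) == "unsafe"


def moderate(text: str, rails: List[Guardrail]) -> bool:
    """A workflow step that runs all guardrails, blind to their type."""
    return any(rail.detect(text) for rail in rails)


rails = [
    RegexGuardrail([r"\bssn\b", r"\d{3}-\d{2}-\d{4}"]),
    # Placeholder standing in for a served LLM classifier:
    LLMGuardrailAdapter(lambda t: "unsafe" if "attack" in t else "safe"),
]
print(moderate("my ssn is 123-45-6789", rails))  # True: regex rail fires
```

The design benefit sketched here is that the serving layer and the workflow only ever see the `detect()` interface, so swapping a regex rail for a served LLM rail requires no workflow changes.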
Tue 1 Apr
Displayed time zone: Brussels, Copenhagen, Madrid, Paris
09:00 - 10:00 | SAML Session 1: Welcome + Papers (Workshops) at Workshop Room 1 (U82)
Chair(s): Justus Bogner (Vrije Universiteit Amsterdam), Henry Muccini (University of L'Aquila, Italy), Marie Platenius-Mohr (ABB Corporate Research), Karthik Vaidhyanathan (IIIT Hyderabad)
09:00 | 20m | Paper | Agentic RAG with Human-in-the-Retrieval (SAML 2025)
Xiwei (Sherry) Xu (Data61, CSIRO), Dawen (David) Zhang (CSIRO's Data61), Qing Liu (University of Waterloo), Qinghua Lu (Data61, CSIRO), Liming Zhu (CSIRO's Data61)
09:20 | 20m | Paper | Serving LLMs as detectors in workflows with guardrails (SAML 2025)
09:40 | 20m | Paper | Towards practicable Machine Learning development using AI Engineering Blueprints (SAML 2025)
Nicolas Weeger, Annika Stiehl, Joakim von Kistowski (University of Würzburg), Stefan Geißelsöder, Christian Uhl