Systematic Testing of Security-Related Defects in LLM-Based Applications
Mon 28 Apr 2025 16:20 - 16:40 at 212 - Doctoral Symposium 3 (Detailed Presentation)
The adoption of Large Language Models (LLMs), including tools like ChatGPT and Bard, has transformed industries by providing advanced natural language understanding, human-like text generation, and effective problem-solving capabilities. However, despite their widespread use, LLMs have also raised critical security and privacy concerns. For example, adversarial attacks can use LLM outputs to spread misinformation or disclose sensitive information. A notable example is the ability of adversarial prompts to extract private data embedded in LLM training datasets. This highlights the urgent need to address data leakage risks in LLM-based applications. This study focuses on the critical problem of securing LLM-based applications by presenting a comprehensive framework to identify, address, and reduce these vulnerabilities systematically. The framework aims to combine established software testing techniques with AI-specific methods. It also emphasizes seamless integration into DevSecOps workflows in organizations to ensure scalable, secure, and reliable deployment and operation of LLM-based systems.
Sun 27 AprDisplayed time zone: Eastern Time (US & Canada) change
Mon 28 AprDisplayed time zone: Eastern Time (US & Canada) change
16:00 - 17:30 | |||
16:00 20mTalk | Identification and Optimization of Redundant Code Using Large Language Models Doctoral Symposium Shamse Tasnim Cynthia University of Saskatchewan | ||
16:20 20mTalk | Systematic Testing of Security-Related Defects in LLM-Based Applications Doctoral Symposium Hasan Kaplan Jheronimus Academy of Data Science, Tilburg University | ||
16:40 20mTalk | Model-Based Verification for AI-Enabled Cyber-Physical Systems through Guided Falsification of Temporal Logic Properties Doctoral Symposium Hadiza Yusuf University of Michigan - Dearborn |