The adoption of Large Language Models (LLMs), including tools like ChatGPT and Bard, has transformed industries by providing advanced natural language understanding, human-like text generation, and effective problem-solving capabilities. Despite their widespread use, however, LLMs raise critical security and privacy concerns. For example, adversaries can exploit LLMs to spread misinformation or to disclose sensitive information; a notable case is the use of adversarial prompts to extract private data embedded in LLM training datasets. This highlights the urgent need to address data leakage risks in LLM-based applications. This study focuses on the critical problem of securing LLM-based applications by presenting a comprehensive framework to systematically identify, assess, and mitigate these vulnerabilities. The framework aims to combine established software testing techniques with AI-specific methods. It also emphasizes seamless integration into organizational DevSecOps workflows to ensure scalable, secure, and reliable deployment and operation of LLM-based systems.