Large language models (LLMs) such as ChatGPT and Gemini have significantly advanced natural language processing, enabling applications such as chatbots and automated content generation. However, these models can be exploited by malicious individuals who craft toxic prompts to elicit harmful or unethical responses. Such attackers often employ jailbreaking techniques to bypass safety mechanisms, highlighting the need for robust toxic prompt detection. Existing detection techniques, both black-box and white-box, face challenges related to the diversity of toxic prompts, scalability, and computational efficiency. In response, we propose ToxicDetector, a lightweight grey-box method designed to efficiently detect toxic prompts in LLMs. ToxicDetector leverages LLMs to create toxic concept prompts, uses embedding vectors to form feature vectors, and employs a Multi-Layer Perceptron (MLP) classifier for prompt classification. Our evaluation, conducted on various versions of the LLaMa models (LLaMa-3, LLaMa-2, and LLaMa-1), demonstrates that ToxicDetector achieves high accuracy (96.07%) and a low false positive rate (3.29%), outperforming state-of-the-art methods. Additionally, ToxicDetector's processing time of 0.084 seconds per prompt makes it highly suitable for real-time applications. ToxicDetector combines high accuracy, efficiency, and scalability, making it a practical method for toxic prompt detection in LLMs.
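The pipeline the abstract describes (embedding vectors turned into feature vectors, then scored by an MLP classifier) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the embedding dimensionality, the number of toxic concept prompts, the cosine-similarity feature construction, and the untrained `TinyMLP` weights are all hypothetical stand-ins; in the paper, embeddings come from LLaMa hidden states.

```python
import numpy as np

rng = np.random.default_rng(0)

EMB_DIM = 64      # assumed embedding dimensionality (illustrative)
N_CONCEPTS = 8    # assumed number of toxic concept prompts (illustrative)

# Stand-in for embeddings of LLM-generated toxic concept prompts.
concept_embeddings = rng.normal(size=(N_CONCEPTS, EMB_DIM))

def feature_vector(prompt_embedding, concepts=concept_embeddings):
    """One plausible way to 'use embedding vectors to form feature
    vectors': cosine similarity between the prompt embedding and each
    toxic-concept-prompt embedding."""
    sims = concepts @ prompt_embedding
    sims /= (np.linalg.norm(concepts, axis=1)
             * np.linalg.norm(prompt_embedding) + 1e-9)
    return sims  # shape: (N_CONCEPTS,)

class TinyMLP:
    """Minimal two-layer perceptron classifier; weights are random here,
    whereas ToxicDetector trains its MLP on labeled prompts."""
    def __init__(self, in_dim, hidden=16, seed=1):
        r = np.random.default_rng(seed)
        self.w1 = r.normal(size=(in_dim, hidden))
        self.b1 = np.zeros(hidden)
        self.w2 = r.normal(size=(hidden, 1))
        self.b2 = np.zeros(1)

    def predict_proba(self, x):
        h = np.maximum(0.0, x @ self.w1 + self.b1)   # ReLU hidden layer
        logit = h @ self.w2 + self.b2
        return 1.0 / (1.0 + np.exp(-logit))          # sigmoid -> P(toxic)

mlp = TinyMLP(N_CONCEPTS)
prompt_emb = rng.normal(size=EMB_DIM)          # stand-in prompt embedding
p_toxic = float(mlp.predict_proba(feature_vector(prompt_emb))[0])
```

Because the feature vector has one entry per concept prompt rather than one per embedding dimension, the classifier stays small, which is consistent with the per-prompt latency the abstract reports.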
Wed 30 Oct (displayed time zone: Pacific Time, US & Canada)
10:30 - 12:00 | AIWare | Research Papers / Journal-first Papers | Room: Camellia | Chair(s): Vladimir Filkov (University of California at Davis, USA)
10:30 | 15m Talk | Imperceptible Content Poisoning in LLM-Powered Applications (Research Papers) | Quan Zhang (Tsinghua University), Chijin Zhou (Tsinghua University), Gwihwan Go (Tsinghua University), Binqi Zeng (Central South University), Heyuan Shi (Central South University), Zichen Xu (Nanchang University), Yu Jiang (Tsinghua University)
10:45 | 15m Talk | What Makes a High-Quality Training Dataset for Large Language Models: A Practitioners' Perspective (Research Papers) | Xiao Yu (Huawei), Zexian Zhang (Wuhan University of Technology), Feifei Niu (University of Ottawa), Xing Hu (Zhejiang University), Xin Xia (Huawei), John Grundy (Monash University) | Media attached
11:00 | 15m Talk | Prompt Sapper: A LLM-Empowered Production Tool for Building AI Chains (Journal-first Papers) | Yu Cheng (Jiangxi Normal University), Jieshan Chen (CSIRO's Data61), Qing Huang (School of Computer Information Engineering, Jiangxi Normal University), Zhenchang Xing (CSIRO's Data61), Xiwei (Sherry) Xu (Data61, CSIRO), Qinghua Lu (Data61, CSIRO)
11:15 | 15m Talk | Efficient Detection of Toxic Prompts in Large Language Models (Research Papers) | Yi Liu (Nanyang Technological University), Huijia Sun (ShanghaiTech University), Ling Shi (Nanyang Technological University), Gelei Deng (Nanyang Technological University), Yuqi Chen (ShanghaiTech University), Junzhe Yu (ShanghaiTech University), Yang Liu (Nanyang Technological University)
11:30 | 15m Talk | Exploring ChatGPT App Ecosystem: Distribution, Deployment and Security (Research Papers) | Chuan Yan (University of Queensland), Mark Huasong Meng (National University of Singapore), Liuhuo Wan (University of Queensland), Tian Yang Ooi (University of Queensland), Ruomai Ren (University of Queensland), Guangdong Bai (University of Queensland)
11:45 | 15m Talk | DataRecipe — How to Cook the Data for CodeLLM? (Research Papers) | Kisub Kim (Singapore Management University, Singapore), Jounghoon Kim (Chinese University of Hong Kong, Hong Kong), Byeongjo Park (Chungbuk National University, Korea), Dongsun Kim (Korea University), Chun Yong Chong (Monash University Malaysia), Yuan Wang (Independent Researcher, Hong Kong), Tiezhu Sun (University of Luxembourg), Xunzhu Tang (University of Luxembourg), Jacques Klein (University of Luxembourg), Tegawendé F. Bissyandé (University of Luxembourg)