EvidenceBot: A Privacy-Preserving, Customizable RAG-Based Tool for Enhancing Large Language Model Interactions
Large Language Models (LLMs) have become pivotal in transforming industries by enabling advanced natural language processing tasks such as document analysis, content generation, and conversational assistance. Their ability to process and generate human-like text has unlocked unprecedented opportunities across domains including healthcare, education, and finance. However, commercial LLM platforms face several limitations, including data privacy concerns, context size restrictions, lack of parameter configurability, and limited evaluation capabilities. These shortcomings hinder their effectiveness, particularly in scenarios involving sensitive information, large-scale document analysis, or the need for customized output. This underscores the need for a tool that combines the power of LLMs with enhanced privacy, flexibility, and usability.
To address these challenges, we present EvidenceBot, a local, Retrieval-Augmented Generation (RAG)-based solution designed to overcome the limitations of commercial LLM platforms. EvidenceBot enables secure and efficient processing of large document sets through its privacy-preserving RAG pipeline, which extracts and appends only the most relevant text chunks as context for queries. The tool allows users to experiment with hyperparameter configurations to optimize model responses for specific tasks, and includes an evaluation module to assess LLM performance against ground truths using semantic and similarity-based metrics. By offering enhanced privacy, customization, and evaluation capabilities, EvidenceBot bridges critical gaps in the LLM ecosystem, providing a versatile resource for individuals and organizations seeking to leverage LLMs effectively.
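To make the retrieval-then-augment idea concrete, the sketch below shows a minimal RAG-style flow of the kind the abstract describes: chunk local documents, retrieve the most relevant chunks for a query, build an augmented prompt for a locally hosted model, and score a response against a ground truth with a similarity metric. This is an illustrative sketch only; the function names (chunk, retrieve, build_prompt, similarity_score), the parameters (chunk_size, top_k), and the use of TF-IDF cosine similarity are assumptions for exposition, not EvidenceBot's actual API, embedding model, or evaluation metrics.

```python
# Illustrative sketch of a local RAG pipeline (not EvidenceBot's implementation).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def chunk(text: str, chunk_size: int = 500) -> list[str]:
    """Split a document into fixed-size character chunks."""
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]


def retrieve(query: str, chunks: list[str], top_k: int = 3) -> list[str]:
    """Return the top_k chunks most similar to the query (TF-IDF cosine)."""
    vectorizer = TfidfVectorizer()
    matrix = vectorizer.fit_transform(chunks + [query])
    scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
    return [chunks[i] for i in scores.argsort()[::-1][:top_k]]


def build_prompt(query: str, context_chunks: list[str]) -> str:
    """Append only the retrieved chunks as context, keeping the prompt small."""
    context = "\n---\n".join(context_chunks)
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"


def similarity_score(response: str, ground_truth: str) -> float:
    """Similarity-based evaluation: TF-IDF cosine between response and ground truth."""
    vectorizer = TfidfVectorizer()
    matrix = vectorizer.fit_transform([response, ground_truth])
    return float(cosine_similarity(matrix[0], matrix[1])[0, 0])


# Usage: the augmented prompt is sent to a locally hosted model, so the raw
# documents never leave the machine.
docs = ["...full text of document one...", "...full text of document two..."]
all_chunks = [c for d in docs for c in chunk(d)]
question = "What does the report conclude?"
prompt = build_prompt(question, retrieve(question, all_chunks))
```

Because the full documents are reduced to a handful of retrieved chunks before any model call, the same pattern also sidesteps context-size limits, which is the design point the abstract emphasizes.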
Tue 24 Jun | Displayed time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna
10:30 - 12:30 | SE for LLM (Journal First / Industry Papers / Demonstrations / Research Papers / Ideas, Visions and Reflections) at Cosmos 3C | Chair(s): Hongyu Zhang (Chongqing University)
10:30 | 10m Talk | Enhancing Code LLM Training with Programmer Attention (Ideas, Visions and Reflections) | Yifan Zhang (Vanderbilt University), Chen Huang (Sichuan University), Zachary Karas (Vanderbilt University), Thuy Dung Nguyen (Vanderbilt University), Kevin Leach (Vanderbilt University), Yu Huang (Vanderbilt University)
10:40 | 20m Talk | Risk Assessment Framework for Code LLMs via Leveraging Internal States (Industry Papers) | Yuheng Huang (The University of Tokyo), Lei Ma (The University of Tokyo & University of Alberta), Keizaburo Nishikino (Fujitsu Limited), Takumi Akazaki (Fujitsu Limited)
11:00 | 20m Talk | An Empirical Study of Issues in Large Language Model Training Systems (Industry Papers) | Yanjie Gao (Microsoft Research), Ruiming Lu (Shanghai Jiao Tong University), Haoxiang Lin (Microsoft Research), Yueguo Chen (Renmin University of China)
11:20 | 20m Talk | Look Before You Leap: An Exploratory Study of Uncertainty Analysis for Large Language Models (Journal First) | Yuheng Huang (The University of Tokyo), Norman Song, Zhijie Wang (University of Alberta), Shengming Zhao (University of Alberta), Huaming Chen (The University of Sydney), Felix Juefei-Xu (New York University), Lei Ma (The University of Tokyo & University of Alberta)
11:40 | 10m Talk | EvidenceBot: A Privacy-Preserving, Customizable RAG-Based Tool for Enhancing Large Language Model Interactions (Demonstrations) | Nafiz Imtiaz Khan (Department of Computer Science, University of California, Davis), Vladimir Filkov (University of California at Davis, USA)
11:50 | 20m Talk | OpsEval: A Comprehensive Benchmark Suite for Evaluating Large Language Models’ Capability in IT Operations Domain (Industry Papers) | Yuhe Liu (Tsinghua University), Changhua Pei (Computer Network Information Center at Chinese Academy of Sciences), Longlong Xu (Tsinghua University), Bohan Chen (Tsinghua University), Mingze Sun (Tsinghua University), Zhirui Zhang (Beijing University of Posts and Telecommunications), Yongqian Sun (Nankai University), Shenglin Zhang (Nankai University), Kun Wang (Zhejiang University), Haiming Zhang (Chinese Academy of Sciences), Jianhui Li (Computer Network Information Center at Chinese Academy of Sciences), Gaogang Xie (Computer Network Information Center at Chinese Academy of Sciences), Xidao Wen (BizSeer), Xiaohui Nie (Computer Network Information Center at Chinese Academy of Sciences), Minghua Ma (Microsoft), Dan Pei (Tsinghua University)
12:10 | 20m Talk | Hallucination Detection in Large Language Models with Metamorphic Relations (Research Papers) | Borui Yang (Beijing University of Posts and Telecommunications), Md Afif Al Mamun (University of Calgary), Jie M. Zhang (King's College London), Gias Uddin (York University, Canada)
Cosmos 3C is the third room in the Cosmos 3 wing.
When facing the main Cosmos Hall, access to the Cosmos 3 wing is on the left, close to the stairs. The area is accessed through a large door with the number “3”, which will stay open during the event.