ASE 2024
Sun 27 October - Fri 1 November 2024 Sacramento, California, United States
Wed 30 Oct 2024 10:45 - 11:00 at Camellia (AIWare). Chair(s): Vladimir Filkov

Large Language Models (LLMs) have demonstrated remarkable performance in various application domains, largely due to their self-supervised pre-training on extensive high-quality text datasets. However, despite the importance of constructing such datasets, many leading LLMs lack documentation of their dataset construction and training procedures, leaving LLM practitioners with a limited understanding of what makes a high-quality training dataset for LLMs. To fill this gap, we first identified 18 characteristics of high-quality LLM training datasets, as well as 10 potential data pre-processing methods and 6 data quality assessment methods, through detailed interviews with 13 experienced LLM professionals. We then surveyed 219 LLM practitioners from 23 countries across 5 continents. We asked our survey respondents to rate the importance of these characteristics, provide a rationale for their ratings, specify the key data pre-processing and data quality assessment methods they used, and highlight the challenges encountered during these processes. From our analysis, we identified 13 crucial characteristics of high-quality LLM datasets that received high ratings, along with the key rationales respondents provided. We also identified several widely used data pre-processing and data quality assessment methods, as well as 7 challenges encountered during these processes. Based on our findings, we discuss the implications for researchers and practitioners aiming to construct high-quality training datasets for optimizing LLMs.
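(The abstract does not name the specific pre-processing methods studied. Purely as an illustration, the minimal Python sketch below shows two steps commonly applied when cleaning LLM training corpora: exact-hash deduplication and a simple length-based quality filter. The function names and thresholds are hypothetical and are not drawn from the paper.)

import hashlib

def dedupe_exact(docs):
    # Drop exact duplicates by hashing normalized text; a common
    # first pass when cleaning LLM training corpora.
    seen, unique = set(), []
    for doc in docs:
        digest = hashlib.sha256(doc.strip().lower().encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(doc)
    return unique

def filter_by_length(docs, min_words=5, max_words=10_000):
    # Heuristic quality filter: discard documents that are implausibly
    # short or long (thresholds here are illustrative only).
    return [d for d in docs if min_words <= len(d.split()) <= max_words]

corpus = [
    "Hello world, this is a sample document.",
    "hello world, this is a sample document.",  # duplicate after normalization
    "Too short.",
]
print(filter_by_length(dedupe_exact(corpus)))
# -> ['Hello world, this is a sample document.']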

Wed 30 Oct

Displayed time zone: Pacific Time (US & Canada)

10:30 - 12:00
AIWare (Research Papers / Journal-first Papers) at Camellia
Chair(s): Vladimir Filkov University of California at Davis, USA
10:30
15m
Talk
Imperceptible Content Poisoning in LLM-Powered Applications
Research Papers
Quan Zhang Tsinghua University, Chijin Zhou Tsinghua University, Gwihwan Go Tsinghua University, Binqi Zeng Central South University, Heyuan Shi Central South University, Zichen Xu Nanchang University, Yu Jiang Tsinghua University
10:45
15m
Talk
What Makes a High-Quality Training Dataset for Large Language Models: A Practitioners’ Perspective
Research Papers
Xiao Yu Huawei, Zexian Zhang Wuhan University of Technology, Feifei Niu University of Ottawa, Xing Hu Zhejiang University, Xin Xia Huawei, John Grundy Monash University
11:00
15m
Talk
Prompt Sapper: A LLM-Empowered Production Tool for Building AI Chains
Journal-first Papers
Yu Cheng Jiangxi Normal University, Jieshan Chen CSIRO's Data61, Qing Huang School of Computer Information Engineering, Jiangxi Normal University, Zhenchang Xing CSIRO's Data61, Xiwei (Sherry) Xu Data61, CSIRO, Qinghua Lu Data61, CSIRO
11:15
15m
Talk
Efficient Detection of Toxic Prompts in Large Language Models
Research Papers
Yi Liu Nanyang Technological University, Huijia Sun ShanghaiTech University, Ling Shi Nanyang Technological University, Gelei Deng Nanyang Technological University, Yuqi Chen ShanghaiTech University, Junzhe Yu ShanghaiTech University, Yang Liu Nanyang Technological University
11:30
15m
Talk
Exploring ChatGPT App Ecosystem: Distribution, Deployment and Security (ACM SIGSOFT Distinguished Paper Award)
Research Papers
Chuan Yan University of Queensland, Mark Huasong Meng National University of Singapore, Liuhuo Wan University of Queensland, Tian Yang Ooi University of Queensland, Ruomai Ren University of Queensland, Guangdong Bai University of Queensland
11:45
15m
Talk
DataRecipe — How to Cook the Data for CodeLLM?
Research Papers
Kisub Kim Singapore Management University, Singapore, Jounghoon Kim Chinese University of Hong Kong, Hong Kong, Byeongjo Park Chungbuk National University, Korea, Dongsun Kim Korea University, Chun Yong Chong Monash University Malaysia, Yuan Wang Independent Researcher, Hong Kong, Tiezhu Sun University of Luxembourg, Xunzhu Tang University of Luxembourg, Jacques Klein University of Luxembourg, Tegawendé F. Bissyandé University of Luxembourg