ASE 2024
Sun 27 October - Fri 1 November 2024 Sacramento, California, United States
Wed 30 Oct 2024 11:00 - 11:15 at Camellia - AIWare Chair(s): Vladimir Filkov

The emergence of foundation models, such as the large language model (LLM) GPT-4 and the text-to-image model DALL-E, has opened up numerous possibilities across various domains. People can now use natural language (i.e., prompts) to communicate with AI to perform tasks. While people can access foundation models through chatbots (e.g., ChatGPT), chat, regardless of the capabilities of the underlying models, is not a production tool for building reusable AI services. Frameworks like LangChain enable LLM-based application development but require substantial programming knowledge, posing a barrier to non-programmers. To mitigate this, we systematically review, summarize, refine, and extend the concept of the AI chain, incorporating the best principles and practices accumulated in software engineering over decades into AI chain engineering, thereby systematizing an AI chain engineering methodology. We also develop a no-code integrated development environment, Prompt Sapper, which naturally embodies these AI chain engineering principles and patterns in the process of building AI chains, improving their performance and quality. With Prompt Sapper, AI chain engineers can compose prompt-based AI services on top of foundation models through chat-based requirement analysis and visual programming. Our user study evaluated and demonstrated the efficiency and correctness of Prompt Sapper.
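To make the "AI chain" idea concrete, here is a minimal, illustrative sketch (not Prompt Sapper's actual implementation or API) of composing prompt-based steps into a reusable service: each step wraps a prompt template around a model call, and a chain pipes one step's output into the next. The `fake_llm` function, the templates, and all names below are hypothetical stand-ins for a real foundation-model call.

```python
# Illustrative sketch of an "AI chain": prompt-based steps composed
# into one reusable AI service. All names here are hypothetical.

def fake_llm(prompt: str) -> str:
    """Stand-in for a real foundation-model call (e.g., GPT-4)."""
    return f"[model output for: {prompt}]"

def make_step(template: str):
    """Wrap a prompt template into a reusable AI-chain step."""
    def step(text: str) -> str:
        # Fill the template with the previous step's output, then call the model.
        return fake_llm(template.format(input=text))
    return step

def chain(*steps):
    """Compose steps left to right into a single callable service."""
    def run(text: str) -> str:
        for step in steps:
            text = step(text)
        return text
    return run

# Two hypothetical steps chained into one service.
summarize = make_step("Summarize the requirement: {input}")
draft_code = make_step("Write code for this summary: {input}")
service = chain(summarize, draft_code)

result = service("Build a photo-tagging app")
```

The design point is that each step is an independent, testable unit with a single prompt responsibility, which is the kind of modularity the software-engineering principles above aim to bring to prompt-based development.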

Link to the journal paper: https://dl.acm.org/doi/10.1145/3638247

Wed 30 Oct

Displayed time zone: Pacific Time (US & Canada)

10:30 - 12:00
AIWare: Research Papers / Journal-first Papers at Camellia
Chair(s): Vladimir Filkov University of California at Davis, USA
10:30 (15m) Talk
Imperceptible Content Poisoning in LLM-Powered Applications
Research Papers
Quan Zhang Tsinghua University, Chijin Zhou Tsinghua University, Gwihwan Go Tsinghua University, Binqi Zeng Central South University, Heyuan Shi Central South University, Zichen Xu The Nanchang University, Yu Jiang Tsinghua University
10:45 (15m) Talk
What Makes a High-Quality Training Dataset for Large Language Models: A Practitioners’ Perspective
Research Papers
Xiao Yu Huawei, Zexian Zhang Wuhan University of Technology, Feifei Niu University of Ottawa, Xing Hu Zhejiang University, Xin Xia Huawei, John Grundy Monash University
11:00 (15m) Talk
Prompt Sapper: A LLM-Empowered Production Tool for Building AI Chains
Journal-first Papers
Yu Cheng Jiangxi Normal University, Jieshan Chen CSIRO's Data61, Qing Huang School of Computer Information Engineering, Jiangxi Normal University, Zhenchang Xing CSIRO's Data61, Xiwei (Sherry) Xu Data61, CSIRO, Qinghua Lu Data61, CSIRO
11:15 (15m) Talk
Efficient Detection of Toxic Prompts in Large Language Models
Research Papers
Yi Liu Nanyang Technological University, Huijia Sun ShanghaiTech University, Ling Shi Nanyang Technological University, Gelei Deng Nanyang Technological University, Yuqi Chen ShanghaiTech University, Junzhe Yu ShanghaiTech University, Yang Liu Nanyang Technological University
11:30 (15m) Talk
Exploring ChatGPT App Ecosystem: Distribution, Deployment and Security (ACM SIGSOFT Distinguished Paper Award)
Research Papers
Chuan Yan University of Queensland, Mark Huasong Meng National University of Singapore, Liuhuo Wan University of Queensland, Tian Yang Ooi University of Queensland, Ruomai Ren University of Queensland, Guangdong Bai University of Queensland
11:45 (15m) Talk
DataRecipe — How to Cook the Data for CodeLLM?
Research Papers
Kisub Kim Singapore Management University, Singapore, Jounghoon Kim Chinese University of Hong Kong, Hong Kong, Byeongjo Park Chungbuk National University, Korea, Dongsun Kim Korea University, Chun Yong Chong Monash University Malaysia, Yuan Wang Independent Researcher, Hong Kong, Tiezhu Sun University of Luxembourg, Xunzhu Tang University of Luxembourg, Jacques Klein University of Luxembourg, Tegawendé F. Bissyandé University of Luxembourg