ASE 2024
Sun 27 October - Fri 1 November 2024 Sacramento, California, United States

This program is tentative and subject to change.

Tue 29 Oct 2024 13:45 - 14:00 at Camellia - LLM for SE 1

Artificial Intelligence Generated Content (AIGC) has garnered considerable attention for its impressive performance, with large language models (LLMs) such as ChatGPT emerging as leading AIGC models that produce high-quality responses across various applications, including software development and maintenance. Despite this potential, the misuse of LLMs in security- and safety-critical domains, such as academic integrity and answering questions on Stack Overflow, raises significant concerns. Numerous AIGC detectors have been developed and evaluated on natural language data, but their performance on code-related content generated by LLMs remains unexplored.

To fill this gap, we present an empirical study evaluating existing AIGC detectors in the software domain. We select three state-of-the-art LLMs, i.e., GPT-3.5, WizardCoder, and CodeLlama, for machine-content generation, and construct a comprehensive dataset of 2.23M samples of code-related content for each model, covering popular software activities: Q&A (150K), code summarization (1M), and code generation (1.1M). We evaluate thirteen AIGC detectors, six commercial and seven open-source, on this dataset. Our results indicate that AIGC detectors perform worse on code-related data than on natural language data. Fine-tuning can improve detector performance, especially on content from the same domain, but generalization remains a challenge.
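
To make the setup concrete, the minimal sketch below scores one code snippet with a publicly available off-the-shelf detector through the Hugging Face Transformers pipeline API. It is an illustration only, not the authors' evaluation harness: the chosen detector (openai-community/roberta-base-openai-detector, a RoBERTa-based GPT-2 output detector), the snippet, and the label names are stand-ins for the thirteen detectors and 2.23M samples studied in the paper.

from transformers import pipeline

# Assumption: any binary human-vs-AI text classifier exposed as a
# text-classification pipeline could be plugged in here instead.
detector = pipeline(
    "text-classification",
    model="openai-community/roberta-base-openai-detector",
)

code_snippet = """
def binary_search(items, target):
    low, high = 0, len(items) - 1
    while low <= high:
        mid = (low + high) // 2
        if items[mid] == target:
            return mid
        if items[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1
"""

# This detector reports "Real" (human-written) or "Fake" (machine-generated)
# with a confidence score; truncation guards against over-long inputs.
print(detector(code_snippet, truncation=True))

In the study itself, such per-sample predictions are compared against ground truth (human-written vs. LLM-generated) across the Q&A, code-summarization, and code-generation subsets to measure each detector's accuracy on code content.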


Tue 29 Oct

Displayed time zone: Pacific Time (US & Canada)

13:30 - 15:00: LLM for SE 1 at Camellia
13:30
15m
Talk
How Effective Do Code Language Models Understand Poor-Readability Code?
Research Papers
Chao Hu School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Yitian Chai School of Software, Shanghai Jiao Tong University, Hao Zhou Pattern Recognition Center, WeChat, Tencent, Fandong Meng WeChat AI, Tencent, Jie Zhou Tencent, Xiaodong Gu Shanghai Jiao Tong University
13:45
15m
Talk
An Empirical Study to Evaluate AIGC Detectors on Code Content
Research Papers
Jian Wang, Shangqing Liu Nanyang Technological University, Xiaofei Xie Singapore Management University, Yi Li Nanyang Technological University
Pre-print
14:00
15m
Talk
Distilled GPT for source code summarization
Journal-first Papers
Chia-Yi Su University of Notre Dame, Collin McMillan University of Notre Dame
14:15
15m
Talk
Leveraging Large Language Model to Assist Detecting Rust Code Comment Inconsistency
Research Papers
Yichi Zhang, Zixi Liu Nanjing University, Yang Feng Nanjing University, Baowen Xu Nanjing University
14:30
10m
Talk
LLM-Based Java Concurrent Program to ArkTS Converter
Tool Demonstrations
Runlin Liu Beihang University, Yuhang Lin Zhejiang University, Yunge Hu Beihang University, Zhe Zhang Beihang University, Xiang Gao Beihang University
14:40
10m
Talk
Towards Leveraging LLMs for Reducing Open Source Onboarding Information Overload
NIER Track
Elijah Kayode Adejumo George Mason University, Brittany Johnson George Mason University
14:50
10m
Talk
CoDefeater: Using LLMs To Find Defeaters in Assurance Cases
NIER Track
Usman Gohar Dept. of Computer Science, Iowa State University, Michael Hunter Iowa State University, Robyn Lutz Iowa State University, Myra Cohen Iowa State University