A code summary is a brief natural language description of source code. Summaries are usually only a single sentence long, yet they form the backbone of developer documentation. A short description such as "changes all visible polygons to the color blue" can give a programmer a high-level idea of what code does without the effort of reading the code itself. Recently, products based on Large Language Models such as ChatGPT have demonstrated a strong ability to write these descriptions automatically. However, to use these tools, programmers must send their code to untrusted third parties for processing (e.g., via an API call). This loss of custody is not acceptable to many organizations. In this paper, we present an alternative: we train an open-source model using sample output generated by GPT-3.5 in a process related to knowledge distillation. Our model is small enough (350M parameters) to be run on a single 16GB GPU, yet we show in our evaluation that it is large enough to mimic GPT-3.5 on this task.
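The distillation setup described in the abstract can be pictured as collecting GPT-3.5-generated summaries for a corpus of functions and then fine-tuning a small open-source language model on the resulting (code, summary) pairs. The sketch below is illustrative only and is not the paper's actual code: the checkpoint name (Salesforce/codegen-350M-mono, chosen merely as an example of a ~350M-parameter open model), the data file and field names, the prompt template, and the hyperparameters are all assumptions.

# Minimal illustrative sketch: fine-tune a small open-source causal LM on
# (code, GPT-3.5 summary) pairs, in the spirit of knowledge distillation.
# Model name, data file, and hyperparameters are assumptions chosen so the
# example fits on a single 16GB GPU; they are not the paper's configuration.
import json
import torch
from torch.utils.data import Dataset
from transformers import AutoTokenizer, AutoModelForCausalLM, Trainer, TrainingArguments

MODEL_NAME = "Salesforce/codegen-350M-mono"  # example ~350M-parameter open model

class CodeSummaryDataset(Dataset):
    """One JSON object per line with 'code' and 'gpt35_summary' fields (hypothetical format)."""
    def __init__(self, path, tokenizer, max_len=512):
        self.examples = [json.loads(line) for line in open(path)]
        self.tokenizer = tokenizer
        self.max_len = max_len

    def __len__(self):
        return len(self.examples)

    def __getitem__(self, idx):
        ex = self.examples[idx]
        # The student model learns to continue the code with the teacher's summary.
        text = ex["code"] + "\n# Summary: " + ex["gpt35_summary"] + self.tokenizer.eos_token
        enc = self.tokenizer(text, truncation=True, max_length=self.max_len,
                             padding="max_length", return_tensors="pt")
        input_ids = enc["input_ids"].squeeze(0)
        attention_mask = enc["attention_mask"].squeeze(0)
        labels = input_ids.clone()
        labels[attention_mask == 0] = -100  # ignore padding positions in the loss
        return {"input_ids": input_ids, "attention_mask": attention_mask, "labels": labels}

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
tokenizer.pad_token = tokenizer.eos_token  # CodeGen has no pad token by default
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

train_ds = CodeSummaryDataset("gpt35_summaries.jsonl", tokenizer)  # hypothetical file

args = TrainingArguments(
    output_dir="distilled-summarizer",
    per_device_train_batch_size=4,
    gradient_accumulation_steps=8,
    num_train_epochs=3,
    fp16=torch.cuda.is_available(),
    logging_steps=100,
)

Trainer(model=model, args=args, train_dataset=train_ds).train()

At inference time, such a fine-tuned model would be prompted with a function followed by the "# Summary:" marker and asked to generate the description, keeping the code on local hardware rather than sending it to a third-party API.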
Tue 29 Oct (displayed time zone: Pacific Time, US & Canada)
13:30 - 15:00 | LLM for SE 1 (Research Papers / NIER Track / Tool Demonstrations / Journal-first Papers) at Camellia. Chair(s): Chengcheng Wan (East China Normal University)
13:30 (15m) Talk | How Effective Do Code Language Models Understand Poor-Readability Code? (Research Papers). Chao Hu (Shanghai Jiao Tong University), Yitian Chai (School of Software, Shanghai Jiao Tong University), Hao Zhou (Pattern Recognition Center, WeChat, Tencent), Fandong Meng (WeChat AI, Tencent), Jie Zhou (Tencent), Xiaodong Gu (Shanghai Jiao Tong University)
13:45 (15m) Talk | An Empirical Study to Evaluate AIGC Detectors on Code Content (Research Papers). Jian Wang (Nanyang Technological University), Shangqing Liu (Nanyang Technological University), Xiaofei Xie (Singapore Management University), Yi Li (Nanyang Technological University). Pre-print
14:00 (15m) Talk | Distilled GPT for source code summarization (Journal-first Papers)
14:15 (15m) Talk | Leveraging Large Language Model to Assist Detecting Rust Code Comment Inconsistency (Research Papers). Zhang Yichi, Zixi Liu (Nanjing University), Yang Feng (Nanjing University), Baowen Xu (Nanjing University)
14:30 (10m) Talk | LLM-Based Java Concurrent Program to ArkTS Converter (Tool Demonstrations). Runlin Liu (Beihang University), Yuhang Lin (Zhejiang University), Yunge Hu (Beihang University), Zhe Zhang (Beihang University), Xiang Gao (Beihang University)
14:40 (10m) Talk | Towards Leveraging LLMs for Reducing Open Source Onboarding Information Overload (NIER Track)
14:50 (10m) Talk | CoDefeater: Using LLMs To Find Defeaters in Assurance Cases (NIER Track). Usman Gohar (Dept. of Computer Science, Iowa State University), Michael Hunter (Iowa State University), Robyn Lutz (Iowa State University), Myra Cohen (Iowa State University)