ASE 2024
Sun 27 October - Fri 1 November 2024, Sacramento, California, United States

This program is tentative and subject to change.

Wed 30 Oct 2024 11:15 - 11:30 at Gardenia - Code generation 2

Code generation benchmarks such as HumanEval are widely adopted to evaluate LLMs’ capabilities. However, after consolidating the latest 24 benchmarks, we noticed three significant imbalances. First, imbalanced programming languages: 95.8% of benchmarks involve Python, while only 5 benchmarks involve Java, resulting in an insufficient understanding of LLMs’ capability to generate Java code. Second, imbalanced code granularity: function-/statement-level benchmarks account for over 83.3% of benchmarks, and only a handful extend to the class/project level, all of which are limited to Python. Third, lacking advanced features: existing benchmarks primarily assess basic coding skills (e.g., variables, operators, and control structures), while overlooking advanced Object-Oriented Programming (OOP) features (i.e., encapsulation, inheritance, and polymorphism). Considering the prevalence of these advanced features in real-world Java project development, constructing benchmarks that test LLMs on handling OOP features is necessary. To fill these gaps, we propose JavaBench, a project-level Java benchmark that exercises OOP features. It comprises four Java projects with 389 methods in 106 Java classes. The test coverage is up to 92%, and JavaBench is attested by 282 undergraduate students, reaching a 90.93/100 average score (i.e., pass rate against the test suite), ensuring the quality of the documentation, code skeletons, and tests. To better evaluate LLMs’ capability against JavaBench, we introduce a systematic evaluation design covering three context settings and five synthesis strategies at two granularities using three hierarchical metrics. Our extensive experiments yield several interesting findings. First, regarding project-level Java programming, LLMs lag far behind undergraduate students (no project can be correctly completed by any studied LLM, and at most 41.17% Pass@5 under a more relaxed evaluation). Second, using the method signature as prompt context may strike an ideal balance for project-level code generation. JavaBench is publicly available at https://github.com/java-bench/JavaBench. We also release a leaderboard and invite model developers to participate and test their models against JavaBench at https://java-bench.github.io/leaderboard.html.
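To make the setup concrete, below is a minimal sketch of what a project-level, OOP-flavored task of this kind could look like. It is a hypothetical illustration, not a task taken from JavaBench itself: the Shape/Circle class names are invented, and the comments mark which part a model would be asked to synthesize under the “method signature as prompt context” setting the abstract describes.

// Hypothetical skeleton, invented for illustration; not from the JavaBench suite.
// Under a signature-as-context setting, the model sees the documentation and
// signatures below and must generate the body of area().
abstract class Shape {
    private final String name;                // encapsulation: state stays private

    Shape(String name) { this.name = name; }

    public String getName() { return name; }

    /** Returns the area of this shape in square units. */
    public abstract double area();            // polymorphism: dispatched per subclass
}

class Circle extends Shape {                  // inheritance from the Shape base class
    private final double radius;

    Circle(double radius) {
        super("circle");
        this.radius = radius;
    }

    @Override
    public double area() {
        return Math.PI * radius * radius;     // the withheld body the model must produce
    }
}

A generated body is then judged against the project’s test suite. The Pass@5 figure cited above follows the usual pass@k convention (a task counts as solved if any of 5 sampled completions passes the tests), though the paper’s exact estimator may differ.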


Wed 30 Oct

Displayed time zone: Pacific Time (US & Canada)

10:30 - 12:00: Code generation 2 at Gardenia
10:30
15m
Talk
Preference-Guided Refactored Tuning for Retrieval Augmented Code Generation
Research Papers
Xinyu Gao, Yun Xiong (Fudan University), Deze Wang (National University of Defense Technology), Zhenhan Guan (Fudan University), Zejian Shi (Fudan University), Haofen Wang (Tongji University), Shanshan Li (National University of Defense Technology)
Pre-print
10:45
15m
Talk
Sifting through the Chaff: On Utilizing Execution Feedback for Ranking the Generated Code Candidates
Research Papers
Zhihong Sun (Shandong Normal University), Yao Wan (Huazhong University of Science and Technology), Jia Li, Hongyu Zhang (Chongqing University), Zhi Jin (Peking University), Ge Li (Peking University), Chen Lyu (Shandong Normal University)
11:00
15m
Talk
Promise and Peril of Collaborative Code Generation Models: Balancing Effectiveness and Memorization
Research Papers
Zhi Chen (Singapore Management University), Lingxiao Jiang (Singapore Management University)
11:15
15m
Talk
JavaBench: A Benchmark of Object-Oriented Code Generation for Evaluating Large Language Models
Research Papers
Jialun Cao (The Hong Kong University of Science and Technology), Zhiyong Chen (Nanjing University), Jiarong Wu (The Hong Kong University of Science and Technology), Shing-Chi Cheung (The Hong Kong University of Science and Technology), Chang Xu (Nanjing University)
11:30
15m
Talk
PACGBI: A Pipeline for Automated Code Generation from Backlog Items
Tool Demonstrations
Mahja Sarschar (Hochschule für Technik und Wirtschaft Berlin), Gefei Zhang (HTW Berlin), Annika Nowak (Capgemini)
11:45
15m
Talk
Contextualized Data-Wrangling Code Generation in Computational Notebooks
Research Papers
Junjie Huang (The Chinese University of Hong Kong), Daya Guo (Sun Yat-sen University), Chenglong Wang (Microsoft Research), Jiazhen Gu (The Chinese University of Hong Kong), Shuai Lu (Microsoft Research), Jeevana Priya Inala (Microsoft Research), Cong Yan (Microsoft Research), Jianfeng Gao (Microsoft Research), Nan Duan (Microsoft Research), Michael Lyu (The Chinese University of Hong Kong)