ASE 2024
Sun 27 October - Fri 1 November 2024, Sacramento, California, United States

This program is tentative and subject to change.

Wed 30 Oct 2024 10:30 - 10:45 at Magnolia - Testing 2

Selecting the best code solution from multiple generated candidates is an essential task in code generation, and it can be aided by reliable validators (e.g., developer-written test cases). Since reliable test cases are not always available and can be expensive to build in practice, researchers have proposed automatically generating test cases to assess code solutions. However, when both the code solutions and the test cases are plausible but not reliable, selecting the best solution becomes challenging. Although some heuristic strategies have been proposed to tackle this problem, they lack strong theoretical guarantees, and whether an optimal selection strategy exists remains an open question. Our work contributes in two ways. First, we show that within a Bayesian framework, the optimal selection strategy can be defined based on the posterior probability of the observed passing states between solutions and tests. Identifying the best solution is then framed as an integer programming problem. Second, we propose an efficient approach for approximating this optimal (yet uncomputable) strategy, where the approximation error is bounded by the correctness of prior knowledge, and we incorporate effective prior knowledge tailored to code generation tasks. Both theoretical and empirical studies confirm that existing heuristics are limited in selecting the best solutions with plausible test cases. Our proposed approximated optimal strategy, B4, significantly surpasses existing heuristics in selecting code solutions generated by large language models (LLMs) with LLM-generated tests, achieving relative performance improvements of up to 50% over the strongest heuristic and 246% over random selection in the most challenging scenarios.
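To make the selection problem concrete, the sketch below illustrates the setup the abstract describes: candidate solutions and plausible (LLM-generated) tests yield an observed pass/fail matrix, solutions are grouped by the exact set of tests they pass, and a simple consensus-style heuristic score picks a group. This is a minimal, hedged illustration only; the function names and the score are assumptions for exposition, not the paper's B4 algorithm, whose contribution is to replace such heuristic scores with one derived from the posterior probability of the observed passing states.

```python
from collections import defaultdict
from typing import Dict, FrozenSet, List, Sequence


def select_by_consensus(pass_matrix: Sequence[Sequence[bool]]) -> int:
    """Return the index of one solution from the highest-scoring consensus group.

    pass_matrix[i][j] is True iff candidate solution i passes generated test j.
    """
    # Group solutions by the exact set of tests they pass.
    groups: Dict[FrozenSet[int], List[int]] = defaultdict(list)
    for i, row in enumerate(pass_matrix):
        passed = frozenset(j for j, ok in enumerate(row) if ok)
        groups[passed].append(i)

    # Heuristic score (illustrative): larger groups of agreeing solutions that
    # pass more tests are assumed more likely to contain a correct solution.
    passed_tests, solutions = max(
        groups.items(), key=lambda kv: len(kv[1]) * len(kv[0])
    )
    return solutions[0]


if __name__ == "__main__":
    # Toy example: 3 candidate solutions, 2 plausible tests.
    matrix = [
        [True, True],   # solution 0 passes both tests
        [True, True],   # solution 1 agrees with solution 0
        [False, True],  # solution 2 passes only the second test
    ]
    print(select_by_consensus(matrix))  # -> 0 (from the largest consensus group)
```

In this framing, an approach like B4 would rank the consensus groups by their posterior probability under prior knowledge about solution and test correctness, rather than by the raw count product used in this heuristic sketch.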


Wed 30 Oct

Displayed time zone: Pacific Time (US & Canada)

10:30 - 12:00 Testing 2 (Magnolia)
10:30 (15m, Talk, Research Papers)
B4: Towards Optimal Assessment of Plausible Code Solutions with Plausible Tests
Mouxiang Chen (Zhejiang University), Zhongxin Liu (Zhejiang University), He Tao (Zhejiang University), Yusu Hong (Zhejiang University), David Lo (Singapore Management University), Xin Xia (Huawei), JianLing Sun (Zhejiang University)
10:45 (15m, Talk, Research Papers)
Reducing Test Runtime by Transforming Test Fixtures
Chengpeng Li (University of Texas at Austin), Abdelrahman Baz (The University of Texas at Austin), August Shi (The University of Texas at Austin)
11:00 (15m, Talk, Research Papers)
Efficient Incremental Code Coverage Analysis for Regression Test Suites
Jiale Amber Wang (University of Waterloo), Kaiyuan Wang (Google), Pengyu Nie (University of Waterloo)
11:15 (15m, Talk, Research Papers)
Combining Coverage and Expert Features with Semantic Representation for Coincidental Correctness Detection
Huan Xie (Chongqing University), Yan Lei (Chongqing University), Maojin Li (Chongqing University), Meng Yan (Chongqing University), Sheng Zhang (Chongqing University)
11:30 (15m, Talk, Research Papers)
A Combinatorial Testing Approach to Surrogate Model Construction
Sunny Shree (The University of Texas at Arlington), Krishna Khadka (The University of Texas at Arlington), Jeff Yu Lei (University of Texas at Arlington), Raghu Kacker (National Institute of Standards and Technology), D. Richard Kuhn (National Institute of Standards and Technology)
11:45 (15m, Talk, Industry Showcase)
The Importance of Accounting for Execution Failures when Predicting Test Flakiness
Guillaume Haben (University of Luxembourg), Sarra Habchi (Ubisoft Montréal), John Micco (VMware), Mark Harman (Meta Platforms, Inc. and UCL), Mike Papadakis (University of Luxembourg), Maxime Cordy (University of Luxembourg), Yves Le Traon (University of Luxembourg)