ASE 2024
Sun 27 October - Fri 1 November 2024 Sacramento, California, United States
Tue 29 Oct 2024 16:00 - 16:15 at Carr - Performance and load

Within the realms of scientific computing, large-scale data processing, and artificial intelligence-powered computation, disparities in performance that originate from differing code implementations directly influence the practicality of the code. Although existing work has tried to leverage code knowledge to enhance the execution performance of code generated by large language models, it neglects code evaluation outcomes, which directly reflect execution details, resulting in inefficient computation. To address this issue, we propose DSCT-Decode, an innovative adaptive decoding strategy for large language models that employs a data structure named ‘Code Token Tree’ (CTT), which guides token selection based on code evaluation outcomes. DSCT-Decode assesses generated code across three dimensions—correctness, performance, and similarity—and utilizes a dynamic penalty-based boundary intersection method to compute multi-objective scores, which are then used to adjust the scores of nodes in the CTT during backpropagation. By maintaining a balance between exploration, through token selection probabilities, and exploitation, through multi-objective scoring, DSCT-Decode effectively navigates the code space to swiftly identify high-performance code solutions. To substantiate our framework, we developed a new benchmark, big-DS-1000, an extension of DS-1000 and the first benchmark to specifically evaluate code generation methods based on execution performance. Comparative evaluations with leading large language models, such as CodeLlama and GPT-4, show that our framework achieves an average performance enhancement of nearly 30%. Furthermore, 30% of the generated programs exhibited a performance improvement of more than 20%, underscoring the effectiveness and potential of our framework for practical applications.
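The mechanism sketched in the abstract—a token tree whose node scores are updated by backpropagating a scalarized multi-objective evaluation, with a UCT-like rule balancing model token probabilities (exploration) against accumulated scores (exploitation)—can be illustrated in miniature. The sketch below is an assumption-laden reconstruction, not the authors' implementation: the class names (`CTTNode`), the UCT constant, and the penalty-based boundary intersection (PBI) formulation (here the standard MOEA/D-style scalarization, adapted to maximization) are all illustrative choices.

```python
import math

class CTTNode:
    """One node of a hypothetical Code Token Tree: a token, its model-assigned
    probability (prior), child nodes, a visit count, and an accumulated score."""
    def __init__(self, token, prior=1.0):
        self.token = token
        self.prior = prior          # token probability from the LLM (exploration signal)
        self.children = {}          # token -> CTTNode
        self.visits = 0
        self.score_sum = 0.0        # accumulated scalarized evaluation scores

    def value(self):
        return self.score_sum / self.visits if self.visits else 0.0

    def select_child(self, c=1.4):
        # UCT-style selection: exploit children with high evaluation scores,
        # explore children the model rates as probable but rarely visited.
        return max(
            self.children.values(),
            key=lambda n: n.value()
            + c * n.prior * math.sqrt(self.visits) / (1 + n.visits),
        )

def pbi_score(objectives, weights, theta=5.0):
    """Penalty-based boundary intersection scalarization of the three
    objectives (correctness, performance, similarity), assuming maximization.
    d1 measures progress along the weight vector; d2 penalizes deviation."""
    norm = math.sqrt(sum(w * w for w in weights))
    d1 = sum(o * w for o, w in zip(objectives, weights)) / norm
    d2 = math.sqrt(sum((o - d1 * w / norm) ** 2
                       for o, w in zip(objectives, weights)))
    return d1 - theta * d2

def backpropagate(path, score):
    """Propagate an evaluated code sample's score up the tree, MCTS-style."""
    for node in path:
        node.visits += 1
        node.score_sum += score

# Illustrative usage: two candidate first tokens, one evaluated rollout.
root = CTTNode("<s>")
root.children = {t: CTTNode(t, p) for t, p in [("def", 0.6), ("import", 0.4)]}
score = pbi_score((1.0, 0.8, 0.5), (1 / 3, 1 / 3, 1 / 3))
backpropagate([root, root.children["def"]], score)
next_node = root.select_child()
```

Each decoding round would thus select a path through the CTT, let the model complete the code, evaluate it on the three dimensions, scalarize with PBI, and back-propagate the result so that subsequent token choices favor subtrees yielding high-performance code.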

Tue 29 Oct

Displayed time zone: Pacific Time (US & Canada)

15:30 - 17:00
15:30
15m
Talk
AI-driven Java Performance Testing: Balancing Result Quality with Testing Time
Research Papers
Luca Traini University of L'Aquila, Federico Di Menna University of L'Aquila, Vittorio Cortellessa University of L'Aquila
DOI Pre-print
15:45
15m
Talk
MLOLET - Machine Learning Optimized Load and Endurance Testing: An industrial experience report
Industry Showcase
Arthur Vitui Concordia University, Tse-Hsun (Peter) Chen Concordia University
16:00
15m
Talk
Dynamic Scoring Code Token Tree: A Novel Decoding Strategy for Generating High-Performance Code
Research Papers
Muzi Qu University of Chinese Academy of Sciences, Jie Liu Institute of Software, Chinese Academy of Sciences, Liangyi Kang Institute of Software, Chinese Academy of Sciences, Shuai Wang Institute of Software, Chinese Academy of Sciences, Dan Ye Institute of Software, Chinese Academy of Sciences, Tao Huang Institute of Software at Chinese Academy of Sciences
16:15
10m
Talk
BenchCloud: A Platform for Scalable Performance Benchmarking
Tool Demonstrations
Dirk Beyer LMU Munich, Po-Chun Chien LMU Munich, Marek Jankola LMU Munich
DOI Pre-print Media Attached
16:25
10m
Talk
A Formal Treatment of Performance Bugs
NIER Track
Omar I. Al Bataineh Gran Sasso Science Institute (GSSI)
Recorded Talk