ASE 2024
Sun 27 October - Fri 1 November 2024 Sacramento, California, United States
Tue 29 Oct 2024 15:45 - 16:00 at Carr - Performance and load

Load testing is essential for ensuring the performance and stability of modern large-scale systems, which must handle vast numbers of concurrent requests. Traditional load tests often require long execution times, making them costly and impractical within the short release cycles typical of contemporary software development. In this paper, we present our experience deploying MLOLET, a Machine Learning Optimized Load and Endurance Testing framework, at Ericsson. MLOLET addresses key challenges in load testing by determining early stop points for tests and by forecasting throughput and response-time trends in production environments. By training a time-series model on key performance indicators (KPIs) collected during load tests, MLOLET enables early detection of abnormal system behavior and provides accurate performance forecasts. This capability allows load test engineers to make informed decisions on resource allocation, enhancing both testing efficiency and system reliability. We document the design of MLOLET, its application in industrial settings, and the feedback received from its deployment, highlighting its impact on load testing processes and operational performance.

Tue 29 Oct

Displayed time zone: Pacific Time (US & Canada)

15:30 - 17:00
15:30
15m
Talk
AI-driven Java Performance Testing: Balancing Result Quality with Testing Time
Research Papers
Luca Traini University of L'Aquila, Federico Di Menna University of L'Aquila, Vittorio Cortellessa University of L'Aquila
15:45
15m
Talk
MLOLET - Machine Learning Optimized Load and Endurance Testing: An industrial experience report
Industry Showcase
Arthur Vitui Concordia University, Tse-Hsun (Peter) Chen Concordia University
16:00
15m
Talk
Dynamic Scoring Code Token Tree: A Novel Decoding Strategy for Generating High-Performance Code
Research Papers
Muzi Qu University of Chinese Academy of Sciences, Jie Liu Institute of Software, Chinese Academy of Sciences, Liangyi Kang Institute of Software, Chinese Academy of Sciences, Shuai Wang Institute of Software, Chinese Academy of Sciences, Dan Ye Institute of Software, Chinese Academy of Sciences, Tao Huang Institute of Software at Chinese Academy of Sciences
16:15
10m
Talk
BenchCloud: A Platform for Scalable Performance Benchmarking
Tool Demonstrations
Dirk Beyer LMU Munich, Po-Chun Chien LMU Munich, Marek Jankola LMU Munich
16:25
10m
Talk
A Formal Treatment of Performance Bugs
Recorded Talk
NIER Track
Omar I. Al Bataineh Gran Sasso Science Institute (GSSI)