A Combinatorial Testing Approach to Hyperparameter Optimization (Distinguished Paper Award Candidate)
In machine learning, hyperparameter optimization (HPO) is essential for effective model training and significantly impacts model performance. Hyperparameters are predefined model settings that fine-tune the model's behavior and are critical to modeling complex data patterns. Traditional HPO approaches such as Grid Search, Random Search, and Bayesian Optimization have been widely used in this field. However, as datasets grow and models increase in complexity, these approaches often require substantial time and resources for HPO. This research introduces a novel approach that applies $t$-way testing, a combinatorial software testing technique for identifying faults with a minimal number of test cases, to HPO. $t$-way testing ensures coverage of all possible combinations of values of any $t$ parameters selected from a total of $n$ parameters. It substantially narrows the search space while effectively covering parameter interactions. We hypothesize that this technique provides a more resource-efficient approach to HPO. Our experimental results show that our approach reduces the number of required model evaluations and significantly cuts computational cost while still outperforming traditional HPO approaches.
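To make the idea concrete, the sketch below shows how 2-way (pairwise) testing can shrink an HPO search space: a greedy construction picks a small set of configurations such that every pair of values of any two hyperparameters is exercised at least once. The hyperparameter grid, the greedy generator, and the `pairwise_covering_array` helper are illustrative assumptions, not the paper's implementation; in practice a dedicated covering-array generator would typically be used.

```python
# Minimal sketch (assumptions only): a greedy 2-way (pairwise) covering array
# over a hypothetical hyperparameter grid, whose rows would then be evaluated
# as candidate configurations instead of the full Cartesian product.
from itertools import combinations, product

# Hypothetical hyperparameter domains (assumed for illustration).
params = {
    "learning_rate": [0.001, 0.01, 0.1],
    "max_depth": [3, 6, 9],
    "n_estimators": [100, 300],
    "subsample": [0.7, 1.0],
}

def pairwise_covering_array(params):
    """Greedily select configurations until every value pair of every two
    parameters appears in at least one selected configuration (t = 2)."""
    names = list(params)
    # All (parameter, value) pair interactions that must be covered.
    uncovered = {
        ((a, va), (b, vb))
        for a, b in combinations(names, 2)
        for va, vb in product(params[a], params[b])
    }
    all_configs = [dict(zip(names, vals)) for vals in product(*params.values())]
    chosen = []
    while uncovered:
        # Pick the configuration that covers the most still-uncovered pairs.
        best = max(
            all_configs,
            key=lambda c: sum(
                ((a, c[a]), (b, c[b])) in uncovered
                for a, b in combinations(names, 2)
            ),
        )
        chosen.append(best)
        uncovered -= {
            ((a, best[a]), (b, best[b])) for a, b in combinations(names, 2)
        }
    return chosen

suite = pairwise_covering_array(params)
total = len(list(product(*params.values())))
print(f"{len(suite)} configurations instead of {total} exhaustive ones")
for cfg in suite:
    # A real HPO loop would train and validate a model here, e.g.
    # score = evaluate_model(cfg)  # placeholder name, not defined above
    print(cfg)
```

For this toy grid the greedy construction needs roughly a quarter of the 36 exhaustive configurations while still covering every two-parameter interaction, which is the source of the evaluation savings the abstract describes.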
Mon 15 Apr (displayed time zone: Lisbon)
14:00 - 15:30
14:00 15m Talk | A Combinatorial Testing Approach to Hyperparameter Optimization (Distinguished Paper Award Candidate) | Research and Experience Papers | Krishna Khadka (The University of Texas at Arlington), Jaganmohan Chandrasekaran (Virginia Tech), Jeff Yu Lei (University of Texas at Arlington), Raghu Kacker (National Institute of Standards and Technology), D. Richard Kuhn (National Institute of Standards and Technology)
14:15 15m Talk | Mutation-based Consistency Testing for Evaluating the Code Understanding Capability of LLMs | Research and Experience Papers |
14:30 10m Talk | LLMs for Test Input Generation for Semantic Applications | Research and Experience Papers | Zafaryab Rasool (Applied Artificial Intelligence Institute, Deakin University), Scott Barnett (Applied Artificial Intelligence Institute, Deakin University), David Willie (Applied Artificial Intelligence Institute, Deakin University), Stefanus Kurniawan (Deakin University), Sherwin Balugo (Applied Artificial Intelligence Institute, Deakin University), Srikanth Thudumu (Deakin University), Mohamed Abdelrazek (Deakin University, Australia)
14:40 10m Talk | (Why) Is My Prompt Getting Worse? Rethinking Regression Testing for Evolving LLM APIs | Research and Experience Papers | MA Wanqin (The Hong Kong University of Science and Technology), Chenyang Yang (Carnegie Mellon University), Christian Kästner (Carnegie Mellon University)
14:50 10m Talk | Welcome Your New AI Teammate: On Safety Analysis by Leashing Large Language Models | Research and Experience Papers | Ali Nouri (Volvo Cars & Chalmers University of Technology), Beatriz Cabrero-Daniel (University of Gothenburg), Fredrik Torner (Volvo Cars), Hakan Sivencrona (Zenseact AB), Christian Berger (Chalmers University of Technology, Sweden)
15:00 10m Talk | ML-On-Rails: Safeguarding Machine Learning Models in Software Systems – A Case Study | Research and Experience Papers | Hala Abdelkader (Applied Artificial Intelligence Institute, Deakin University), Mohamed Abdelrazek (Deakin University, Australia), Scott Barnett (Applied Artificial Intelligence Institute, Deakin University), Jean-Guy Schneider (Monash University), Priya Rani (RMIT University), Rajesh Vasa (Deakin University, Australia)
15:10 20m Live Q&A | Test - Q&A Session | Research and Experience Papers |