CAIN 2023
Mon 15 - Sat 20 May 2023 Melbourne, Australia
co-located with ICSE 2023
Mon 15 May 2023 19:40 - 20:00 at Virtual - Zoom for CAIN - Training & Learning Chair(s): Rrezarta Krasniqi

The success of machine learning (ML) models depends on careful experimentation and optimization of their hyperparameters. Tuning can affect the reliability and accuracy of a trained model and is the subject of ongoing research. However, little is known about whether and how researchers optimize the hyperparameters of their ML models. This not only limits the adoption of best practices for tuning in research, but also affects the reproducibility of published results. Our research systematically analyzes the use and tuning of hyperparameters in ML publications. To this end, we analyze 2000 code repositories and their associated research papers from Papers with Code. We compare the use and tuning of hyperparameters across three widely used ML libraries: scikit-learn, TensorFlow, and PyTorch. Our results show that most of the available hyperparameters are left untouched, and those that are changed are set to constant values. In particular, there is a large gap between tuning hyperparameters and reporting that tuning in the corresponding research papers. Our results suggest a need for improved research and reporting practices when using ML methods, in order to improve the reproducibility of published results.
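The three usage patterns the abstract distinguishes can be sketched with a minimal scikit-learn example (a hypothetical illustration, not code drawn from the study's corpus):

```python
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Default usage: every hyperparameter keeps its library default
# (the most common pattern reported in the study).
clf_default = SVC()

# Constant usage: hyperparameters are set once to fixed values,
# with no search over alternatives.
clf_constant = SVC(C=10.0, kernel="rbf", gamma=0.01)

# Tuned usage: hyperparameters are searched over a grid with
# cross-validation instead of being fixed up front.
param_grid = {"C": [0.1, 1.0, 10.0], "gamma": [0.001, 0.01, 0.1]}
search = GridSearchCV(SVC(), param_grid, cv=3)
```

Only the last pattern constitutes tuning in the sense studied here; the first two leave the hyperparameter values unexplored.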

Mon 15 May

Displayed time zone: Hobart

19:00 - 20:30
Training & Learning (Papers) at Virtual - Zoom for CAIN
Chair(s): Rrezarta Krasniqi University of North Texas

The session recording is available on YouTube.

19:00
20m
Long-paper
Replay-Driven Continual Learning for the Industrial Internet of Things
Papers
Sagar Sen, Simon Myklebust Nielsen University of Oslo, Norway, Erik Johannes Husom SINTEF Digital, Arda Goknil SINTEF Digital, Simeon Tverdal SINTEF Digital, Leonardo Sastoque Pinilla Centro de Fabricación Avanzada Aeronáutica (CFAA)
19:20
20m
Long-paper
Towards Understanding Model Quantization for Reliable Deep Neural Network Deployment
Papers
Qiang Hu University of Luxembourg, Yuejun Guo University of Luxembourg, Maxime Cordy University of Luxembourg, Luxembourg, Xiaofei Xie Singapore Management University, Wei Ma Nanyang Technological University, Singapore, Mike Papadakis University of Luxembourg, Luxembourg, Yves Le Traon University of Luxembourg, Luxembourg
19:40
20m
Long-paper
Exploring Hyperparameter Usage and Tuning in Machine Learning Research
Distinguished Paper Award Candidate
Papers
Sebastian Simon Leipzig University, Nikolay Kolyada, Christopher Akiki Leipzig University, Martin Potthast Leipzig University, Benno Stein Bauhaus-University Weimar, Norbert Siegmund Leipzig University
Pre-print
20:00
15m
Short-paper
An Initial Analysis of Repair and Side-effect Prediction for Neural Networks
Papers
Yuta Ishimoto Kyushu University, Ken Matsui Kyushu University, Masanari Kondo Kyushu University, Naoyasu Ubayashi Kyushu University, Yasutaka Kamei Kyushu University
Pre-print