CAIN 2023
Mon 15 - Sat 20 May 2023 Melbourne, Australia
co-located with ICSE 2023
Mon 15 May 2023 19:20 - 19:40 at Virtual - Zoom for CAIN - Training & Learning
Chair(s): Rrezarta Krasniqi

Deep Neural Networks (DNNs) have gained considerable attention in the past decades due to their astounding performance in diverse applications, such as natural language modeling, self-driving assistance, and source code understanding. As the field has rapidly evolved, increasingly complex DNN architectures have been proposed, along with ever-larger sets of pre-trained model parameters. A common way to run such DNN models on resource-constrained devices (e.g., mobile phones) is to perform model compression before deployment. However, recent research has demonstrated that model compression, e.g., model quantization, yields accuracy degradation as well as output disagreements when tested on unseen data. Since unseen data often exhibit distribution shifts and arise in the wild, the quality and reliability of quantized models are not guaranteed. In this paper, we conduct a comprehensive study to characterize and help users understand the behaviors of quantized models. Our study considers four datasets spanning image and text, eight DNN architectures including both feed-forward and recurrent neural networks, and 42 shifted test sets with both synthetic and natural distribution shifts. The results reveal that 1) data with distribution shifts lead to more disagreements than data without; 2) quantization-aware training can produce more stable models than standard, adversarial, and Mixup training; 3) disagreements often come with closer top-1 and top-2 output probabilities, and Margin is a better indicator than other uncertainty metrics for distinguishing disagreements; 4) retraining the model on disagreements has limited effectiveness in removing them. We release our code and models as a new benchmark for further study of model quantization.
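The two quantities the abstract revolves around, prediction disagreements between a full-precision model and its quantized counterpart, and the Margin uncertainty metric (the gap between the top-1 and top-2 output probabilities), can be illustrated compactly. The sketch below is not the authors' released benchmark code; it assumes PyTorch's post-training dynamic quantization as the compression scheme (the paper also studies quantization-aware training), and the helper names (find_disagreements, margin) are hypothetical.

```python
# Minimal sketch: detecting quantization-induced disagreements and ranking
# inputs by the Margin metric. Illustrative only, under the assumptions above.
import torch
import torch.nn.functional as F

def find_disagreements(fp_model, q_model, inputs):
    """Indices where the quantized model's top-1 label differs from the original's."""
    fp_model.eval()
    q_model.eval()
    with torch.no_grad():
        fp_pred = fp_model(inputs).argmax(dim=1)
        q_pred = q_model(inputs).argmax(dim=1)
    return torch.nonzero(fp_pred != q_pred).flatten()

def margin(logits):
    """Margin metric: top-1 probability minus top-2 probability.
    Small values mean the two leading classes are close, i.e. the prediction
    is more likely to flip after quantization."""
    probs = F.softmax(logits, dim=1)
    top2 = probs.topk(2, dim=1).values
    return top2[:, 0] - top2[:, 1]

if __name__ == "__main__":
    torch.manual_seed(0)
    # Toy feed-forward classifier standing in for the studied architectures.
    fp_model = torch.nn.Sequential(
        torch.nn.Linear(32, 64), torch.nn.ReLU(), torch.nn.Linear(64, 10)
    )
    # Post-training dynamic quantization of the Linear layers to int8.
    q_model = torch.quantization.quantize_dynamic(
        fp_model, {torch.nn.Linear}, dtype=torch.qint8
    )
    x = torch.randn(256, 32)  # stand-in for (possibly distribution-shifted) test data
    idx = find_disagreements(fp_model, q_model, x)
    with torch.no_grad():
        m = margin(fp_model(x))
    print(f"{len(idx)} disagreements out of {len(x)} inputs")
    print("mean Margin on disagreements:",
          m[idx].mean().item() if len(idx) else "n/a")
```

In the paper's setting, inputs with small Margin scores would be flagged as likely disagreement candidates, which is particularly relevant on distribution-shifted data where disagreements are more frequent.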

Mon 15 May

Displayed time zone: Hobart

19:00 - 20:30
Training & Learning (Papers) at Virtual - Zoom for CAIN
Chair(s): Rrezarta Krasniqi University of North Texas


19:00
20m
Long-paper
Replay-Driven Continual Learning for the Industrial Internet of Things
Papers
Sagar Sen, Simon Myklebust Nielsen University of Oslo, Norway, Erik Johannes Husom SINTEF Digital, Arda Goknil SINTEF Digital, Simeon Tverdal SINTEF Digital, Leonardo Sastoque Pinilla Centro de Fabricación Avanzada Aeronáutica (CFAA)
19:20
20m
Long-paper
Towards Understanding Model Quantization for Reliable Deep Neural Network Deployment
Papers
Qiang Hu University of Luxembourg, Yuejun Guo University of Luxembourg, Maxime Cordy University of Luxembourg, Xiaofei Xie Singapore Management University, Wei Ma Nanyang Technological University, Singapore, Mike Papadakis University of Luxembourg, Yves Le Traon University of Luxembourg
19:40
20m
Long-paper
Exploring Hyperparameter Usage and Tuning in Machine Learning Research
Distinguished Paper Award Candidate
Papers
Sebastian Simon Leipzig University, Nikolay Kolyada, Christopher Akiki Leipzig University, Martin Potthast Leipzig University, Benno Stein Bauhaus-University Weimar, Norbert Siegmund Leipzig University
Pre-print
20:00
15m
Short-paper
An Initial Analysis of Repair and Side-effect Prediction for Neural Networks
Papers
Yuta Ishimoto Kyushu University, Ken Matsui Kyushu University, Masanari Kondo Kyushu University, Naoyasu Ubayashi Kyushu University, Yasutaka Kamei Kyushu University
Pre-print