ASE 2020
Mon 21 - Fri 25 September 2020 Melbourne, Australia
Wed 23 Sep 2020 17:10 - 17:30 at Kangaroo - Software Engineering for AI (3) Chair(s): Iftekhar Ahmed

Deep learning (DL) training algorithms utilize nondeterminism to improve models’ accuracy and training efficiency. Hence, multiple identical training runs (e.g., identical training data, algorithm, and network) produce different models with different accuracy and training time. In addition to these algorithmic factors, DL libraries (e.g., TensorFlow and cuDNN) introduce additional variance (referred to as implementation-level variance) due to parallelism, optimization, and floating-point computation. This work is the first to study the variance of DL systems and the awareness of this variance among researchers and practitioners. Our experiments on three datasets with six popular networks show large overall accuracy differences among identical training runs. Even after excluding weak models, the accuracy difference is still 10.8%. In addition, implementation-level factors alone cause the accuracy difference across identical training runs to be up to 2.9%, the per-class accuracy difference to be up to 52.4%, and the difference in training time to convergence to be up to 145.3%. All core (TensorFlow, CNTK, and Theano) and low-level libraries exhibit implementation-level variance across all evaluated versions. Our researcher and practitioner survey shows that 83.8% of the 901 participants are unaware of or unsure about any implementation-level variance. In addition, our literature survey shows that only 19.5±3% of papers in recent top software engineering (SE), AI, and systems conferences use multiple identical training runs to quantify the variance of their DL approaches. This paper raises awareness of DL variance and directs SE researchers to challenging tasks such as creating deterministic DL libraries for debugging and improving the reproducibility of DL software and results.
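The measurement protocol the abstract advocates can be sketched in a few lines: repeat the same training configuration several times and report the spread of the resulting accuracies rather than a single number. Below is a minimal illustrative sketch; `run_training` and the accuracy values it returns are hypothetical stand-ins for an actual DL training run, not the paper's code.

```python
import random
import statistics

def run_training(seed: int) -> float:
    """Stand-in for one "identical" training run.

    Real runs differ even with identical data, algorithm, and network,
    due to algorithmic nondeterminism (shuffling, weight init) and
    implementation-level variance (parallelism, floating-point
    reduction order). Here that variability is simulated with a
    seeded random perturbation around a nominal accuracy of 0.90.
    """
    rng = random.Random(seed)
    return 0.90 + rng.uniform(-0.02, 0.02)  # accuracy in [0.88, 0.92]

# Run the same configuration 16 times and summarize the variance.
accuracies = [run_training(seed) for seed in range(16)]
spread = max(accuracies) - min(accuracies)  # max accuracy difference
stdev = statistics.stdev(accuracies)        # sample standard deviation
print(f"runs={len(accuracies)} spread={spread:.4f} stdev={stdev:.4f}")
```

Reporting the spread (or standard deviation) across identical runs, as only a minority of surveyed papers do, makes it possible to tell whether an apparent improvement from a new technique exceeds the variance of the baseline itself.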

Wed 23 Sep

Displayed time zone: (UTC) Coordinated Universal Time

17:10 - 18:10
Software Engineering for AI (3) Research Papers / Tool Demonstrations at Kangaroo
Chair(s): Iftekhar Ahmed University of California at Irvine, USA
17:10
20m
Talk
Problems and Opportunities in Training Deep Learning Software Systems: An Analysis of Variance (ACM Distinguished Paper)
Research Papers
Hung Viet Pham University of Waterloo, Shangshu Qian Purdue University, Jiannan Wang Purdue University, Thibaud Lutellier University of Waterloo, Jonathan Rosenthal Purdue University, Lin Tan Purdue University, USA, Yaoliang Yu University of Waterloo, Nachiappan Nagappan Microsoft Research
Pre-print
17:30
20m
Talk
NeuroDiff: Scalable Differential Verification of Neural Networks using Fine-Grained Approximation
Research Papers
Brandon Paulsen University of Southern California, Jingbo Wang University of Southern California, Jiawei Wang University of Southern California, Chao Wang USC
Pre-print
17:50
10m
Talk
RepoSkillMiner: Identifying software expertise from GitHub repositories using Natural Language Processing
Tool Demonstrations
Efstratios Kourtzanidis University of Macedonia, Alexander Chatzigeorgiou University of Macedonia, Apostolos Ampatzoglou University of Macedonia
Pre-print